Test Report: KVM_Linux_crio 21790

0500345ed58569c501f3381e2b1a5a0e0bac6bd7:2025-10-27:42095

Failed tests (5/342)

Order  Failed test                                         Duration (s)
   37  TestAddons/parallel/Ingress                               158.78
  150  TestFunctional/parallel/ImageCommands/ImageRemove           3.71
  244  TestPreload                                               157.75
  252  TestKubernetesUpgrade                                     935.35
  267  TestPause/serial/SecondStartNoReconfiguration              47.72
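
Each of these failures can be reproduced in isolation by re-running the corresponding integration test with Go's subtest selector. The sketch below is hedged: it assumes a checkout of the minikube repository (the tests referenced later in this report, such as addons_test.go and helpers_test.go, live under test/integration) and that this job's driver and runtime settings are forwarded through a harness flag; the -minikube-start-args name used here is an assumption and should be checked against the tree.

  # Hypothetical local re-run of one failed test; only the go test flags are
  # standard, the -minikube-start-args flag is an assumption about the harness.
  go test ./test/integration -v -timeout 90m \
    -run "TestAddons/parallel/Ingress" \
    -args -minikube-start-args="--driver=kvm2 --container-runtime=crio"
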
TestAddons/parallel/Ingress (158.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-865238 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-865238 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-865238 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f8e9adf6-9ebd-4271-b241-a112a7898205] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f8e9adf6-9ebd-4271-b241-a112a7898205] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003878167s
I1027 21:52:50.170937  356621 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-865238 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.16018867s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-865238 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.175
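
Exit status 28 from the ssh step above corresponds to curl's "operation timed out" code, propagated back through minikube ssh, so the request to the ingress endpoint hung rather than being refused. The commands below are a minimal sketch of repeating the same check by hand against this profile; they reuse invocations already shown in this log plus standard kubectl queries, and are not part of the test itself.

  # Retry the request with an explicit timeout and verbose output
  # (mirrors the command at addons_test.go:264 above).
  out/minikube-linux-amd64 -p addons-865238 ssh \
    "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

  # Confirm the ingress controller and the nginx test backend are up.
  kubectl --context addons-865238 -n ingress-nginx get pods -o wide
  kubectl --context addons-865238 get pods,svc,ingress -n default
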
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-865238 -n addons-865238
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-865238 logs -n 25: (1.455694187s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-598387                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-598387 │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │ 27 Oct 25 21:49 UTC │
	│ start   │ --download-only -p binary-mirror-176666 --alsologtostderr --binary-mirror http://127.0.0.1:36419 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-176666 │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │                     │
	│ delete  │ -p binary-mirror-176666                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-176666 │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │ 27 Oct 25 21:49 UTC │
	│ addons  │ disable dashboard -p addons-865238                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │                     │
	│ addons  │ enable dashboard -p addons-865238                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │                     │
	│ start   │ -p addons-865238 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │ 27 Oct 25 21:51 UTC │
	│ addons  │ addons-865238 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:51 UTC │ 27 Oct 25 21:51 UTC │
	│ addons  │ addons-865238 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ addons  │ enable headlamp -p addons-865238 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ addons  │ addons-865238 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ addons  │ addons-865238 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ ssh     │ addons-865238 ssh cat /opt/local-path-provisioner/pvc-9a3490da-d28f-4010-8838-0a8f9b29e40e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ ip      │ addons-865238 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ addons  │ addons-865238 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ addons  │ addons-865238 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:53 UTC │
	│ addons  │ addons-865238 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ addons  │ addons-865238 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ addons  │ addons-865238 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ addons  │ addons-865238 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ ssh     │ addons-865238 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-865238                                                                                                                                                                                                                                                                                                                                                                                         │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ addons  │ addons-865238 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:52 UTC │ 27 Oct 25 21:52 UTC │
	│ addons  │ addons-865238 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ addons  │ addons-865238 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ ip      │ addons-865238 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-865238        │ jenkins │ v1.37.0 │ 27 Oct 25 21:55 UTC │ 27 Oct 25 21:55 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 21:49:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 21:49:20.202972  357212 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:49:20.203303  357212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:49:20.203315  357212 out.go:374] Setting ErrFile to fd 2...
	I1027 21:49:20.203320  357212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:49:20.203506  357212 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 21:49:20.204078  357212 out.go:368] Setting JSON to false
	I1027 21:49:20.205258  357212 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5507,"bootTime":1761596253,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 21:49:20.205373  357212 start.go:143] virtualization: kvm guest
	I1027 21:49:20.207435  357212 out.go:179] * [addons-865238] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 21:49:20.209116  357212 notify.go:221] Checking for updates...
	I1027 21:49:20.209135  357212 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 21:49:20.210529  357212 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 21:49:20.212161  357212 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 21:49:20.213736  357212 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 21:49:20.215286  357212 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 21:49:20.217005  357212 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 21:49:20.218840  357212 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 21:49:20.251411  357212 out.go:179] * Using the kvm2 driver based on user configuration
	I1027 21:49:20.252717  357212 start.go:307] selected driver: kvm2
	I1027 21:49:20.252739  357212 start.go:928] validating driver "kvm2" against <nil>
	I1027 21:49:20.252769  357212 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 21:49:20.253519  357212 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 21:49:20.253828  357212 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 21:49:20.253864  357212 cni.go:84] Creating CNI manager for ""
	I1027 21:49:20.253932  357212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 21:49:20.253943  357212 start_flags.go:335] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 21:49:20.253990  357212 start.go:351] cluster config:
	{Name:addons-865238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-865238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 21:49:20.254094  357212 iso.go:125] acquiring lock: {Name:mk5fa90c3652bd8ab3eadfb83a864dcc2e121c25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 21:49:20.255590  357212 out.go:179] * Starting "addons-865238" primary control-plane node in "addons-865238" cluster
	I1027 21:49:20.256956  357212 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 21:49:20.257020  357212 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 21:49:20.257031  357212 cache.go:59] Caching tarball of preloaded images
	I1027 21:49:20.257121  357212 preload.go:233] Found /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 21:49:20.257132  357212 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 21:49:20.257495  357212 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/config.json ...
	I1027 21:49:20.257522  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/config.json: {Name:mkea37af935f0e58e2a5d14e332d10eef4ee0efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:20.257710  357212 start.go:360] acquireMachinesLock for addons-865238: {Name:mka983f7fa498b8241736aecc4fdc7843c00414d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 21:49:20.257760  357212 start.go:364] duration metric: took 34.239µs to acquireMachinesLock for "addons-865238"
	I1027 21:49:20.257780  357212 start.go:93] Provisioning new machine with config: &{Name:addons-865238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-865238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 21:49:20.257849  357212 start.go:125] createHost starting for "" (driver="kvm2")
	I1027 21:49:20.259716  357212 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1027 21:49:20.259931  357212 start.go:159] libmachine.API.Create for "addons-865238" (driver="kvm2")
	I1027 21:49:20.259966  357212 client.go:173] LocalClient.Create starting
	I1027 21:49:20.260080  357212 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem
	I1027 21:49:20.746488  357212 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem
	I1027 21:49:21.070090  357212 main.go:143] libmachine: creating domain...
	I1027 21:49:21.070115  357212 main.go:143] libmachine: creating network...
	I1027 21:49:21.071578  357212 main.go:143] libmachine: found existing default network
	I1027 21:49:21.071833  357212 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 21:49:21.072524  357212 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f80c60}
	I1027 21:49:21.072652  357212 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-865238</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 21:49:21.079160  357212 main.go:143] libmachine: creating private network mk-addons-865238 192.168.39.0/24...
	I1027 21:49:21.159361  357212 main.go:143] libmachine: private network mk-addons-865238 192.168.39.0/24 created
	I1027 21:49:21.159717  357212 main.go:143] libmachine: <network>
	  <name>mk-addons-865238</name>
	  <uuid>b7fd9b32-a0f2-419a-9395-7898a10a44d5</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:0e:e6:60'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 21:49:21.159747  357212 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238 ...
	I1027 21:49:21.159768  357212 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21790-352679/.minikube/cache/iso/amd64/minikube-v1.37.0-1761414747-21797-amd64.iso
	I1027 21:49:21.159789  357212 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 21:49:21.159857  357212 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21790-352679/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21790-352679/.minikube/cache/iso/amd64/minikube-v1.37.0-1761414747-21797-amd64.iso...
	I1027 21:49:21.440138  357212 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa...
	I1027 21:49:21.537062  357212 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/addons-865238.rawdisk...
	I1027 21:49:21.537115  357212 main.go:143] libmachine: Writing magic tar header
	I1027 21:49:21.537144  357212 main.go:143] libmachine: Writing SSH key tar header
	I1027 21:49:21.537219  357212 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238 ...
	I1027 21:49:21.537286  357212 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238
	I1027 21:49:21.537331  357212 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238 (perms=drwx------)
	I1027 21:49:21.537404  357212 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21790-352679/.minikube/machines
	I1027 21:49:21.537419  357212 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21790-352679/.minikube/machines (perms=drwxr-xr-x)
	I1027 21:49:21.537430  357212 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 21:49:21.537439  357212 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21790-352679/.minikube (perms=drwxr-xr-x)
	I1027 21:49:21.537449  357212 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21790-352679
	I1027 21:49:21.537457  357212 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21790-352679 (perms=drwxrwxr-x)
	I1027 21:49:21.537469  357212 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1027 21:49:21.537476  357212 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1027 21:49:21.537495  357212 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1027 21:49:21.537506  357212 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1027 21:49:21.537516  357212 main.go:143] libmachine: checking permissions on dir: /home
	I1027 21:49:21.537522  357212 main.go:143] libmachine: skipping /home - not owner
	I1027 21:49:21.537527  357212 main.go:143] libmachine: defining domain...
	I1027 21:49:21.538927  357212 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-865238</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/addons-865238.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-865238'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1027 21:49:21.547582  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:e0:71:c0 in network default
	I1027 21:49:21.548350  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:21.548373  357212 main.go:143] libmachine: starting domain...
	I1027 21:49:21.548379  357212 main.go:143] libmachine: ensuring networks are active...
	I1027 21:49:21.549195  357212 main.go:143] libmachine: Ensuring network default is active
	I1027 21:49:21.549652  357212 main.go:143] libmachine: Ensuring network mk-addons-865238 is active
	I1027 21:49:21.550297  357212 main.go:143] libmachine: getting domain XML...
	I1027 21:49:21.551532  357212 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-865238</name>
	  <uuid>0c835d19-650b-4229-ba30-3221fdc749a7</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/addons-865238.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:60:65:7c'/>
	      <source network='mk-addons-865238'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e0:71:c0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
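
The libvirt objects shown above can also be inspected directly with standard virsh commands while the test VM exists; a small sketch using the profile and network names from this log, run on the test host against the same qemu:///system URI:

  virsh -c qemu:///system dumpxml addons-865238          # domain XML as started
  virsh -c qemu:///system net-dumpxml mk-addons-865238   # the private network
  # The "waiting for IP" retries that follow poll the same lease data as:
  virsh -c qemu:///system domifaddr addons-865238 --source lease
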
	
	I1027 21:49:23.009713  357212 main.go:143] libmachine: waiting for domain to start...
	I1027 21:49:23.011218  357212 main.go:143] libmachine: domain is now running
	I1027 21:49:23.011242  357212 main.go:143] libmachine: waiting for IP...
	I1027 21:49:23.012148  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:23.012703  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:23.012724  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:23.013048  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:23.013106  357212 retry.go:31] will retry after 223.944361ms: waiting for domain to come up
	I1027 21:49:23.238814  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:23.239392  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:23.239410  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:23.239730  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:23.239770  357212 retry.go:31] will retry after 317.661922ms: waiting for domain to come up
	I1027 21:49:23.559474  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:23.560106  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:23.560127  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:23.560421  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:23.560467  357212 retry.go:31] will retry after 297.752351ms: waiting for domain to come up
	I1027 21:49:23.860162  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:23.860904  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:23.860928  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:23.861322  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:23.861365  357212 retry.go:31] will retry after 568.749646ms: waiting for domain to come up
	I1027 21:49:24.432249  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:24.432927  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:24.432951  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:24.433359  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:24.433402  357212 retry.go:31] will retry after 648.27733ms: waiting for domain to come up
	I1027 21:49:25.083429  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:25.084071  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:25.084090  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:25.084442  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:25.084486  357212 retry.go:31] will retry after 808.474445ms: waiting for domain to come up
	I1027 21:49:25.894463  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:25.895015  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:25.895031  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:25.895300  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:25.895351  357212 retry.go:31] will retry after 1.188363862s: waiting for domain to come up
	I1027 21:49:27.085953  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:27.086559  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:27.086579  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:27.086877  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:27.086946  357212 retry.go:31] will retry after 1.411806911s: waiting for domain to come up
	I1027 21:49:28.500637  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:28.501457  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:28.501478  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:28.501926  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:28.501971  357212 retry.go:31] will retry after 1.1631644s: waiting for domain to come up
	I1027 21:49:29.667429  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:29.667972  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:29.667989  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:29.668280  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:29.668324  357212 retry.go:31] will retry after 1.852548903s: waiting for domain to come up
	I1027 21:49:31.522945  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:31.523643  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:31.523670  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:31.524035  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:31.524102  357212 retry.go:31] will retry after 1.783060165s: waiting for domain to come up
	I1027 21:49:33.308733  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:33.309423  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:33.309444  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:33.309804  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:33.309850  357212 retry.go:31] will retry after 2.302860864s: waiting for domain to come up
	I1027 21:49:35.614082  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:35.614582  357212 main.go:143] libmachine: no network interface addresses found for domain addons-865238 (source=lease)
	I1027 21:49:35.614600  357212 main.go:143] libmachine: trying to list again with source=arp
	I1027 21:49:35.614953  357212 main.go:143] libmachine: unable to find current IP address of domain addons-865238 in network mk-addons-865238 (interfaces detected: [])
	I1027 21:49:35.614996  357212 retry.go:31] will retry after 3.578630542s: waiting for domain to come up
	I1027 21:49:39.197969  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.198710  357212 main.go:143] libmachine: domain addons-865238 has current primary IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.198732  357212 main.go:143] libmachine: found domain IP: 192.168.39.175
	I1027 21:49:39.198741  357212 main.go:143] libmachine: reserving static IP address...
	I1027 21:49:39.199272  357212 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-865238", mac: "52:54:00:60:65:7c", ip: "192.168.39.175"} in network mk-addons-865238
	I1027 21:49:39.457685  357212 main.go:143] libmachine: reserved static IP address 192.168.39.175 for domain addons-865238
	I1027 21:49:39.457730  357212 main.go:143] libmachine: waiting for SSH...
	I1027 21:49:39.457738  357212 main.go:143] libmachine: Getting to WaitForSSH function...
	I1027 21:49:39.461345  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.462028  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:39.462074  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.462378  357212 main.go:143] libmachine: Using SSH client type: native
	I1027 21:49:39.462693  357212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1027 21:49:39.462712  357212 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1027 21:49:39.581404  357212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 21:49:39.581857  357212 main.go:143] libmachine: domain creation complete
	I1027 21:49:39.583954  357212 machine.go:94] provisionDockerMachine start ...
	I1027 21:49:39.587070  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.587456  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:39.587479  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.587689  357212 main.go:143] libmachine: Using SSH client type: native
	I1027 21:49:39.587928  357212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1027 21:49:39.587939  357212 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 21:49:39.703309  357212 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1027 21:49:39.703340  357212 buildroot.go:166] provisioning hostname "addons-865238"
	I1027 21:49:39.706578  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.707022  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:39.707052  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.707228  357212 main.go:143] libmachine: Using SSH client type: native
	I1027 21:49:39.707439  357212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1027 21:49:39.707452  357212 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-865238 && echo "addons-865238" | sudo tee /etc/hostname
	I1027 21:49:39.843224  357212 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-865238
	
	I1027 21:49:39.846796  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.847265  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:39.847291  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.847466  357212 main.go:143] libmachine: Using SSH client type: native
	I1027 21:49:39.847716  357212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1027 21:49:39.847760  357212 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-865238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-865238/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-865238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 21:49:39.974971  357212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 21:49:39.975004  357212 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21790-352679/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-352679/.minikube}
	I1027 21:49:39.975071  357212 buildroot.go:174] setting up certificates
	I1027 21:49:39.975091  357212 provision.go:84] configureAuth start
	I1027 21:49:39.978603  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.979239  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:39.979285  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.982405  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.982978  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:39.983013  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:39.983199  357212 provision.go:143] copyHostCerts
	I1027 21:49:39.983294  357212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem (1675 bytes)
	I1027 21:49:39.983447  357212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem (1082 bytes)
	I1027 21:49:39.983545  357212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem (1123 bytes)
	I1027 21:49:39.983621  357212 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem org=jenkins.addons-865238 san=[127.0.0.1 192.168.39.175 addons-865238 localhost minikube]
	I1027 21:49:40.177346  357212 provision.go:177] copyRemoteCerts
	I1027 21:49:40.177428  357212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 21:49:40.179726  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.180270  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:40.180306  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.180450  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:49:40.269691  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 21:49:40.302631  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 21:49:40.336499  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 21:49:40.370923  357212 provision.go:87] duration metric: took 395.813629ms to configureAuth
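configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.39.175, addons-865238, localhost and minikube, then copies it to /etc/docker on the guest. A quick way to confirm those SANs on the provisioned machine (a sketch, run over SSH):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'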
	I1027 21:49:40.370957  357212 buildroot.go:189] setting minikube options for container-runtime
	I1027 21:49:40.371208  357212 config.go:182] Loaded profile config "addons-865238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:49:40.374662  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.375210  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:40.375249  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.375527  357212 main.go:143] libmachine: Using SSH client type: native
	I1027 21:49:40.375803  357212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1027 21:49:40.375828  357212 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 21:49:40.638272  357212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 21:49:40.638309  357212 machine.go:97] duration metric: took 1.05433008s to provisionDockerMachine
	I1027 21:49:40.638323  357212 client.go:176] duration metric: took 20.378348543s to LocalClient.Create
	I1027 21:49:40.638347  357212 start.go:167] duration metric: took 20.37841929s to libmachine.API.Create "addons-865238"
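The sysconfig write a few lines above makes CRI-O treat the cluster service CIDR 10.96.0.0/12 as an insecure-registry range and restarts the daemon. Whether the override landed and the service came back can be checked on the guest with something like (a sketch):

    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio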
	I1027 21:49:40.638357  357212 start.go:293] postStartSetup for "addons-865238" (driver="kvm2")
	I1027 21:49:40.638371  357212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 21:49:40.638463  357212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 21:49:40.642279  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.642798  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:40.642828  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.643010  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:49:40.734546  357212 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 21:49:40.740469  357212 info.go:137] Remote host: Buildroot 2025.02
	I1027 21:49:40.740514  357212 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/addons for local assets ...
	I1027 21:49:40.740639  357212 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/files for local assets ...
	I1027 21:49:40.740695  357212 start.go:296] duration metric: took 102.331407ms for postStartSetup
	I1027 21:49:40.743999  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.744485  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:40.744519  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.744767  357212 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/config.json ...
	I1027 21:49:40.744983  357212 start.go:128] duration metric: took 20.48711769s to createHost
	I1027 21:49:40.747255  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.747764  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:40.747787  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.748003  357212 main.go:143] libmachine: Using SSH client type: native
	I1027 21:49:40.748211  357212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1027 21:49:40.748221  357212 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1027 21:49:40.863308  357212 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761601780.819960623
	
	I1027 21:49:40.863335  357212 fix.go:217] guest clock: 1761601780.819960623
	I1027 21:49:40.863344  357212 fix.go:230] Guest: 2025-10-27 21:49:40.819960623 +0000 UTC Remote: 2025-10-27 21:49:40.744995922 +0000 UTC m=+20.594864889 (delta=74.964701ms)
	I1027 21:49:40.863366  357212 fix.go:201] guest clock delta is within tolerance: 74.964701ms
	I1027 21:49:40.863374  357212 start.go:83] releasing machines lock for "addons-865238", held for 20.60560183s
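The guest-clock check compares date +%s.%N inside the VM against the host-side timestamp and proceeds only while the delta stays inside tolerance (about 75ms in this run). A rough standalone equivalent, assuming bc on the host and abbreviating the SSH key path used earlier in this log:

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i ~/.minikube/machines/addons-865238/id_rsa docker@192.168.39.175 'date +%s.%N')
    echo "guest-host clock delta: $(echo "${guest_ts} - ${host_ts}" | bc) s"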
	I1027 21:49:40.866793  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.867315  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:40.867348  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.868218  357212 ssh_runner.go:195] Run: cat /version.json
	I1027 21:49:40.868335  357212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 21:49:40.871534  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.871811  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.872075  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:40.872110  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.872288  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:40.872295  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:49:40.872318  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:40.872525  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:49:40.980749  357212 ssh_runner.go:195] Run: systemctl --version
	I1027 21:49:40.988128  357212 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 21:49:41.159558  357212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 21:49:41.167115  357212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 21:49:41.167202  357212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 21:49:41.190190  357212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 21:49:41.190223  357212 start.go:496] detecting cgroup driver to use...
	I1027 21:49:41.190327  357212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 21:49:41.212940  357212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 21:49:41.232523  357212 docker.go:218] disabling cri-docker service (if available) ...
	I1027 21:49:41.232616  357212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 21:49:41.252905  357212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 21:49:41.272625  357212 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 21:49:41.426547  357212 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 21:49:41.645129  357212 docker.go:234] disabling docker service ...
	I1027 21:49:41.645210  357212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 21:49:41.663266  357212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 21:49:41.680630  357212 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 21:49:41.857742  357212 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 21:49:42.017433  357212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
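Before committing to CRI-O, the start logic stops and masks every competing runtime (containerd, cri-dockerd and docker). The resulting state can be confirmed on the guest with plain systemctl queries (a sketch; a non-zero exit just means a unit is inactive or disabled):

    systemctl is-active containerd cri-docker.service docker || true
    systemctl is-enabled docker.socket cri-docker.socket || true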
	I1027 21:49:42.035172  357212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 21:49:42.061750  357212 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 21:49:42.061835  357212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:49:42.075861  357212 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 21:49:42.075961  357212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:49:42.090430  357212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:49:42.105171  357212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:49:42.120721  357212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 21:49:42.136388  357212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:49:42.151153  357212 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:49:42.174010  357212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:49:42.188258  357212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 21:49:42.200685  357212 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 21:49:42.200756  357212 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 21:49:42.224665  357212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 21:49:42.239413  357212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 21:49:42.388641  357212 ssh_runner.go:195] Run: sudo systemctl restart crio
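The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning pause_image to registry.k8s.io/pause:3.10.1, cgroup_manager to "cgroupfs", conmon_cgroup to "pod", and adding net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. After the restart, the effective values can be re-checked with (a sketch):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf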
	I1027 21:49:42.515832  357212 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 21:49:42.515975  357212 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 21:49:42.522552  357212 start.go:564] Will wait 60s for crictl version
	I1027 21:49:42.522641  357212 ssh_runner.go:195] Run: which crictl
	I1027 21:49:42.527503  357212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 21:49:42.572194  357212 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 21:49:42.572327  357212 ssh_runner.go:195] Run: crio --version
	I1027 21:49:42.605390  357212 ssh_runner.go:195] Run: crio --version
	I1027 21:49:42.639766  357212 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
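With CRI-O 1.29.1 answering on the socket, the same runtime probe can be reproduced by pointing crictl at the endpoint explicitly (a sketch; the /etc/crictl.yaml written above already sets the same default):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info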
	I1027 21:49:42.643847  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:42.644322  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:49:42.644361  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:49:42.644588  357212 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1027 21:49:42.649977  357212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 21:49:42.666733  357212 kubeadm.go:884] updating cluster {Name:addons-865238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-865238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 21:49:42.667000  357212 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 21:49:42.667076  357212 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 21:49:42.704211  357212 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 21:49:42.704288  357212 ssh_runner.go:195] Run: which lz4
	I1027 21:49:42.709556  357212 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 21:49:42.715061  357212 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 21:49:42.715107  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1027 21:49:44.365336  357212 crio.go:462] duration metric: took 1.655817281s to copy over tarball
	I1027 21:49:44.365431  357212 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 21:49:46.068475  357212 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.703008061s)
	I1027 21:49:46.068531  357212 crio.go:469] duration metric: took 1.703155192s to extract the tarball
	I1027 21:49:46.068541  357212 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1027 21:49:46.114945  357212 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 21:49:46.170035  357212 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 21:49:46.170061  357212 cache_images.go:86] Images are preloaded, skipping loading
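The preload path avoids per-image pulls during kubeadm init: a ~400 MB lz4 tarball of the image store is copied to the guest and unpacked into /var, after which crictl sees every required image locally. The unpack step, reproduced as a sketch with the file name from this run:

    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json | head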
	I1027 21:49:46.170070  357212 kubeadm.go:935] updating node { 192.168.39.175 8443 v1.34.1 crio true true} ...
	I1027 21:49:46.170183  357212 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-865238 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-865238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 21:49:46.170260  357212 ssh_runner.go:195] Run: crio config
	I1027 21:49:46.220615  357212 cni.go:84] Creating CNI manager for ""
	I1027 21:49:46.220656  357212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 21:49:46.220677  357212 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 21:49:46.220702  357212 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.175 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-865238 NodeName:addons-865238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 21:49:46.220869  357212 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-865238"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.175"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.175"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 21:49:46.220960  357212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 21:49:46.235845  357212 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 21:49:46.235952  357212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 21:49:46.249765  357212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1027 21:49:46.273138  357212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 21:49:46.295948  357212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
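The 2216-byte kubeadm.yaml.new staged above combines the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration blocks printed earlier into a single file. Once it is copied to /var/tmp/minikube/kubeadm.yaml later in the run, it could be sanity-checked against the same staged binary with a dry run (a sketch, not part of the test flow; on a node that has already been initialized the preflight errors would need to be ignored):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run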
	I1027 21:49:46.320435  357212 ssh_runner.go:195] Run: grep 192.168.39.175	control-plane.minikube.internal$ /etc/hosts
	I1027 21:49:46.325170  357212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 21:49:46.344487  357212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 21:49:46.498525  357212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 21:49:46.545049  357212 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238 for IP: 192.168.39.175
	I1027 21:49:46.545094  357212 certs.go:195] generating shared ca certs ...
	I1027 21:49:46.545117  357212 certs.go:227] acquiring lock for ca certs: {Name:mk64cd4e3986ee7cc99471fffd6acf1db89a5b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:46.545331  357212 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key
	I1027 21:49:47.121321  357212 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt ...
	I1027 21:49:47.121361  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt: {Name:mk92cfe33d02b963608987a65d2edb25eddfb550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:47.121608  357212 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key ...
	I1027 21:49:47.121627  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key: {Name:mk2fad79d799c8f5f25a7945e16cb6cb6c19da25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:47.121753  357212 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key
	I1027 21:49:47.434491  357212 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.crt ...
	I1027 21:49:47.434528  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.crt: {Name:mkc67bcf5b5e621fc898636778810ae2c83a7ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:47.434739  357212 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key ...
	I1027 21:49:47.434759  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key: {Name:mkf7bed948918313ab5a7ba31b62db09d817b2ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:47.434877  357212 certs.go:257] generating profile certs ...
	I1027 21:49:47.434983  357212 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.key
	I1027 21:49:47.435006  357212 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt with IP's: []
	I1027 21:49:47.566506  357212 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt ...
	I1027 21:49:47.566547  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: {Name:mk9fe6072cb84f7aee4a886a6c99b084e7b5e7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:47.566777  357212 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.key ...
	I1027 21:49:47.566795  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.key: {Name:mk42ae02631f8a4a6d3fc04ec6c20e313e95e31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:47.566917  357212 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.key.b56b45d6
	I1027 21:49:47.566946  357212 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.crt.b56b45d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.175]
	I1027 21:49:47.884641  357212 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.crt.b56b45d6 ...
	I1027 21:49:47.884679  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.crt.b56b45d6: {Name:mke62cd869a791326679fa598e2125fd2b48bcf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:47.884902  357212 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.key.b56b45d6 ...
	I1027 21:49:47.884924  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.key.b56b45d6: {Name:mk28d041a1b53afc867000e2ea56141482132521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:47.885037  357212 certs.go:382] copying /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.crt.b56b45d6 -> /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.crt
	I1027 21:49:47.885138  357212 certs.go:386] copying /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.key.b56b45d6 -> /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.key
	I1027 21:49:47.885214  357212 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/proxy-client.key
	I1027 21:49:47.885243  357212 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/proxy-client.crt with IP's: []
	I1027 21:49:48.133061  357212 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/proxy-client.crt ...
	I1027 21:49:48.133099  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/proxy-client.crt: {Name:mkdcfe871777cbe6a144ce515baac8440151fee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:48.133313  357212 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/proxy-client.key ...
	I1027 21:49:48.133333  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/proxy-client.key: {Name:mk4b948a2c4f3e06afa586503bb5c4c9a5619979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:49:48.133558  357212 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 21:49:48.133616  357212 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem (1082 bytes)
	I1027 21:49:48.133657  357212 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem (1123 bytes)
	I1027 21:49:48.133693  357212 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem (1675 bytes)
	I1027 21:49:48.134395  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 21:49:48.179956  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 21:49:48.219397  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 21:49:48.258660  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 21:49:48.295300  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 21:49:48.332871  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 21:49:48.368872  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 21:49:48.405478  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 21:49:48.444825  357212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 21:49:48.482359  357212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 21:49:48.506243  357212 ssh_runner.go:195] Run: openssl version
	I1027 21:49:48.514001  357212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 21:49:48.529507  357212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 21:49:48.536097  357212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1027 21:49:48.536220  357212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 21:49:48.545249  357212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
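The two steps above install minikubeCA.pem under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0), which is how OpenSSL-based clients look up trust anchors. The hash-to-filename convention can be reproduced directly (a sketch):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"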
	I1027 21:49:48.561046  357212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 21:49:48.566685  357212 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 21:49:48.566747  357212 kubeadm.go:401] StartCluster: {Name:addons-865238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-865238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 21:49:48.566854  357212 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:49:48.566956  357212 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:49:48.611528  357212 cri.go:89] found id: ""
	I1027 21:49:48.611613  357212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 21:49:48.625146  357212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 21:49:48.639262  357212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 21:49:48.653573  357212 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 21:49:48.653594  357212 kubeadm.go:158] found existing configuration files:
	
	I1027 21:49:48.653650  357212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 21:49:48.666392  357212 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 21:49:48.666473  357212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 21:49:48.680311  357212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 21:49:48.692967  357212 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 21:49:48.693036  357212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 21:49:48.706785  357212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 21:49:48.719553  357212 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 21:49:48.719618  357212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 21:49:48.734531  357212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 21:49:48.747534  357212 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 21:49:48.747641  357212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 21:49:48.763515  357212 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 21:49:48.947762  357212 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 21:50:02.175267  357212 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 21:50:02.175399  357212 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 21:50:02.175549  357212 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 21:50:02.175669  357212 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 21:50:02.175775  357212 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 21:50:02.175853  357212 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 21:50:02.177768  357212 out.go:252]   - Generating certificates and keys ...
	I1027 21:50:02.177873  357212 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 21:50:02.178000  357212 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 21:50:02.178145  357212 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 21:50:02.178233  357212 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 21:50:02.178332  357212 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 21:50:02.178403  357212 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 21:50:02.178476  357212 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 21:50:02.178704  357212 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-865238 localhost] and IPs [192.168.39.175 127.0.0.1 ::1]
	I1027 21:50:02.178814  357212 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 21:50:02.178965  357212 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-865238 localhost] and IPs [192.168.39.175 127.0.0.1 ::1]
	I1027 21:50:02.179042  357212 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 21:50:02.179141  357212 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 21:50:02.179209  357212 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 21:50:02.179262  357212 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 21:50:02.179312  357212 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 21:50:02.179369  357212 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 21:50:02.179416  357212 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 21:50:02.179470  357212 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 21:50:02.179624  357212 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 21:50:02.179732  357212 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 21:50:02.179808  357212 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 21:50:02.181566  357212 out.go:252]   - Booting up control plane ...
	I1027 21:50:02.181663  357212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 21:50:02.181730  357212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 21:50:02.181784  357212 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 21:50:02.181879  357212 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 21:50:02.181995  357212 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 21:50:02.182080  357212 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 21:50:02.182147  357212 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 21:50:02.182179  357212 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 21:50:02.182319  357212 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 21:50:02.182434  357212 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 21:50:02.182489  357212 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001299772s
	I1027 21:50:02.182565  357212 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 21:50:02.182662  357212 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.175:8443/livez
	I1027 21:50:02.182791  357212 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 21:50:02.182921  357212 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 21:50:02.183005  357212 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.98830856s
	I1027 21:50:02.183079  357212 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.717409784s
	I1027 21:50:02.183134  357212 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003781996s
	I1027 21:50:02.183221  357212 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 21:50:02.183320  357212 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 21:50:02.183381  357212 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 21:50:02.183548  357212 kubeadm.go:319] [mark-control-plane] Marking the node addons-865238 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 21:50:02.183616  357212 kubeadm.go:319] [bootstrap-token] Using token: qvqnxk.m2cpmj494cd5zga2
	I1027 21:50:02.185472  357212 out.go:252]   - Configuring RBAC rules ...
	I1027 21:50:02.185635  357212 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 21:50:02.185742  357212 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 21:50:02.185944  357212 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 21:50:02.186159  357212 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 21:50:02.186338  357212 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 21:50:02.186444  357212 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 21:50:02.186629  357212 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 21:50:02.186673  357212 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 21:50:02.186741  357212 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 21:50:02.186751  357212 kubeadm.go:319] 
	I1027 21:50:02.186815  357212 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 21:50:02.186823  357212 kubeadm.go:319] 
	I1027 21:50:02.186904  357212 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 21:50:02.186927  357212 kubeadm.go:319] 
	I1027 21:50:02.186950  357212 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 21:50:02.187009  357212 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 21:50:02.187056  357212 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 21:50:02.187062  357212 kubeadm.go:319] 
	I1027 21:50:02.187111  357212 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 21:50:02.187121  357212 kubeadm.go:319] 
	I1027 21:50:02.187161  357212 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 21:50:02.187167  357212 kubeadm.go:319] 
	I1027 21:50:02.187228  357212 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 21:50:02.187348  357212 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 21:50:02.187459  357212 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 21:50:02.187470  357212 kubeadm.go:319] 
	I1027 21:50:02.187578  357212 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 21:50:02.187642  357212 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 21:50:02.187648  357212 kubeadm.go:319] 
	I1027 21:50:02.187713  357212 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qvqnxk.m2cpmj494cd5zga2 \
	I1027 21:50:02.187807  357212 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cb8ddfbc3ba5a862ece84051047c6250a766e6f4afeb4ad0a97b6e833be7e0c \
	I1027 21:50:02.187834  357212 kubeadm.go:319] 	--control-plane 
	I1027 21:50:02.187838  357212 kubeadm.go:319] 
	I1027 21:50:02.187915  357212 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 21:50:02.187922  357212 kubeadm.go:319] 
	I1027 21:50:02.188004  357212 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qvqnxk.m2cpmj494cd5zga2 \
	I1027 21:50:02.188152  357212 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cb8ddfbc3ba5a862ece84051047c6250a766e6f4afeb4ad0a97b6e833be7e0c 
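The join commands printed by kubeadm embed a bootstrap token plus the SHA-256 hash of the cluster CA public key. That hash can be recomputed from the CA certificate in minikube's cert directory, which is the standard way to verify what a joining node is handed; it should reproduce the sha256 value shown above (a sketch):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'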
	I1027 21:50:02.188174  357212 cni.go:84] Creating CNI manager for ""
	I1027 21:50:02.188185  357212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 21:50:02.190554  357212 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1027 21:50:02.191883  357212 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1027 21:50:02.210764  357212 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
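Bridge CNI is enabled by writing a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, matching the 10.244.0.0/16 pod CIDR chosen earlier. What CRI-O will actually load can be listed straight from the guest (a sketch):

    sudo ls -l /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist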
	I1027 21:50:02.239272  357212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 21:50:02.239364  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:50:02.239410  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-865238 minikube.k8s.io/updated_at=2025_10_27T21_50_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=addons-865238 minikube.k8s.io/primary=true
	I1027 21:50:02.303035  357212 ops.go:34] apiserver oom_adj: -16
	I1027 21:50:02.416518  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:50:02.916878  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:50:03.416995  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:50:03.916848  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:50:04.417062  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:50:04.917184  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:50:05.417281  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:50:05.917664  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:50:06.417542  357212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:50:06.532762  357212 kubeadm.go:1114] duration metric: took 4.293456476s to wait for elevateKubeSystemPrivileges
	I1027 21:50:06.532822  357212 kubeadm.go:403] duration metric: took 17.966078512s to StartCluster
	I1027 21:50:06.532849  357212 settings.go:142] acquiring lock: {Name:mk9b0cd8ae1e83c76c2473e7845967d905910c67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:50:06.533116  357212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 21:50:06.533824  357212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/kubeconfig: {Name:mkf142c57fc1d516984237b4e01b6acd26119765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:50:06.534155  357212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 21:50:06.534185  357212 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 21:50:06.534252  357212 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 21:50:06.534397  357212 addons.go:69] Setting yakd=true in profile "addons-865238"
	I1027 21:50:06.534413  357212 config.go:182] Loaded profile config "addons-865238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:50:06.534426  357212 addons.go:69] Setting metrics-server=true in profile "addons-865238"
	I1027 21:50:06.534445  357212 addons.go:238] Setting addon metrics-server=true in "addons-865238"
	I1027 21:50:06.534421  357212 addons.go:238] Setting addon yakd=true in "addons-865238"
	I1027 21:50:06.534452  357212 addons.go:69] Setting inspektor-gadget=true in profile "addons-865238"
	I1027 21:50:06.534485  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.534499  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.534505  357212 addons.go:238] Setting addon inspektor-gadget=true in "addons-865238"
	I1027 21:50:06.534526  357212 addons.go:69] Setting default-storageclass=true in profile "addons-865238"
	I1027 21:50:06.534562  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.534586  357212 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-865238"
	I1027 21:50:06.534594  357212 addons.go:69] Setting ingress=true in profile "addons-865238"
	I1027 21:50:06.534607  357212 addons.go:69] Setting cloud-spanner=true in profile "addons-865238"
	I1027 21:50:06.534660  357212 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-865238"
	I1027 21:50:06.534664  357212 addons.go:238] Setting addon ingress=true in "addons-865238"
	I1027 21:50:06.534674  357212 addons.go:238] Setting addon cloud-spanner=true in "addons-865238"
	I1027 21:50:06.534683  357212 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-865238"
	I1027 21:50:06.534702  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.534727  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.534729  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.535522  357212 addons.go:69] Setting gcp-auth=true in profile "addons-865238"
	I1027 21:50:06.535545  357212 addons.go:69] Setting registry=true in profile "addons-865238"
	I1027 21:50:06.535557  357212 mustload.go:66] Loading cluster: addons-865238
	I1027 21:50:06.535565  357212 addons.go:238] Setting addon registry=true in "addons-865238"
	I1027 21:50:06.535590  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.535776  357212 config.go:182] Loaded profile config "addons-865238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:50:06.536033  357212 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-865238"
	I1027 21:50:06.536054  357212 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-865238"
	I1027 21:50:06.536162  357212 addons.go:69] Setting ingress-dns=true in profile "addons-865238"
	I1027 21:50:06.536179  357212 addons.go:238] Setting addon ingress-dns=true in "addons-865238"
	I1027 21:50:06.536220  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.536447  357212 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-865238"
	I1027 21:50:06.536455  357212 addons.go:69] Setting volcano=true in profile "addons-865238"
	I1027 21:50:06.536508  357212 addons.go:238] Setting addon volcano=true in "addons-865238"
	I1027 21:50:06.536531  357212 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-865238"
	I1027 21:50:06.536548  357212 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-865238"
	I1027 21:50:06.536562  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.536577  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.536591  357212 out.go:179] * Verifying Kubernetes components...
	I1027 21:50:06.536724  357212 addons.go:69] Setting volumesnapshots=true in profile "addons-865238"
	I1027 21:50:06.536747  357212 addons.go:238] Setting addon volumesnapshots=true in "addons-865238"
	I1027 21:50:06.536775  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.536785  357212 addons.go:69] Setting registry-creds=true in profile "addons-865238"
	I1027 21:50:06.536804  357212 addons.go:238] Setting addon registry-creds=true in "addons-865238"
	I1027 21:50:06.536828  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.536521  357212 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-865238"
	I1027 21:50:06.537756  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.537708  357212 addons.go:69] Setting storage-provisioner=true in profile "addons-865238"
	I1027 21:50:06.537841  357212 addons.go:238] Setting addon storage-provisioner=true in "addons-865238"
	I1027 21:50:06.537869  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.538369  357212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 21:50:06.543480  357212 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 21:50:06.543498  357212 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 21:50:06.543513  357212 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1027 21:50:06.543556  357212 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 21:50:06.543679  357212 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 21:50:06.544086  357212 addons.go:238] Setting addon default-storageclass=true in "addons-865238"
	I1027 21:50:06.544135  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.544179  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:06.545109  357212 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 21:50:06.545156  357212 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 21:50:06.545169  357212 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 21:50:06.545248  357212 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 21:50:06.545263  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 21:50:06.545312  357212 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 21:50:06.545322  357212 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 21:50:06.545468  357212 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 21:50:06.545484  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 21:50:06.545737  357212 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-865238"
	I1027 21:50:06.545791  357212 host.go:66] Checking if "addons-865238" exists ...
	W1027 21:50:06.546388  357212 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 21:50:06.546780  357212 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 21:50:06.546818  357212 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 21:50:06.547376  357212 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 21:50:06.547797  357212 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 21:50:06.547798  357212 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 21:50:06.547849  357212 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 21:50:06.547798  357212 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 21:50:06.549047  357212 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 21:50:06.549067  357212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 21:50:06.549402  357212 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 21:50:06.549462  357212 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 21:50:06.549469  357212 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 21:50:06.549477  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 21:50:06.549483  357212 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 21:50:06.549497  357212 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 21:50:06.549506  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 21:50:06.549423  357212 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 21:50:06.549404  357212 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 21:50:06.549729  357212 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 21:50:06.551571  357212 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 21:50:06.551626  357212 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 21:50:06.551643  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 21:50:06.551649  357212 out.go:179]   - Using image docker.io/busybox:stable
	I1027 21:50:06.551733  357212 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 21:50:06.551762  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 21:50:06.551737  357212 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 21:50:06.551801  357212 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 21:50:06.552381  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 21:50:06.553014  357212 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 21:50:06.553033  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 21:50:06.554937  357212 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1027 21:50:06.554958  357212 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 21:50:06.556380  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.556990  357212 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 21:50:06.557012  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 21:50:06.557304  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.558049  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.558171  357212 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 21:50:06.560676  357212 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 21:50:06.560730  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.560768  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.560696  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.560674  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.560997  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.561017  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.561742  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.561901  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.562028  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.562200  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.562860  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.563034  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.563093  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.563123  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.563164  357212 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 21:50:06.563209  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.563237  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.563751  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.563825  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.564182  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.564522  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.564580  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.564758  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.564938  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.565205  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.565255  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.565384  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.566044  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.566072  357212 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 21:50:06.566076  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.566074  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.566264  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.566294  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.566390  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.566507  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.566618  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.566841  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.566860  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.567247  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.567263  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.567323  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.567373  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.567328  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.567405  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.567457  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.567566  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.567833  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.568233  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.568577  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.568604  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.568653  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.568845  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.568874  357212 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 21:50:06.569338  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.569373  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.569572  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:06.570104  357212 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 21:50:06.570121  357212 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 21:50:06.572838  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.573238  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:06.573262  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:06.573450  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	W1027 21:50:06.873373  357212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47600->192.168.39.175:22: read: connection reset by peer
	I1027 21:50:06.873438  357212 retry.go:31] will retry after 136.612876ms: ssh: handshake failed: read tcp 192.168.39.1:47600->192.168.39.175:22: read: connection reset by peer
	W1027 21:50:06.880068  357212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47606->192.168.39.175:22: read: connection reset by peer
	I1027 21:50:06.880107  357212 retry.go:31] will retry after 152.173486ms: ssh: handshake failed: read tcp 192.168.39.1:47606->192.168.39.175:22: read: connection reset by peer
	W1027 21:50:06.882618  357212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47620->192.168.39.175:22: read: connection reset by peer
	I1027 21:50:06.882650  357212 retry.go:31] will retry after 323.16322ms: ssh: handshake failed: read tcp 192.168.39.1:47620->192.168.39.175:22: read: connection reset by peer
	W1027 21:50:07.014656  357212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47634->192.168.39.175:22: read: connection reset by peer
	I1027 21:50:07.014706  357212 retry.go:31] will retry after 229.545036ms: ssh: handshake failed: read tcp 192.168.39.1:47634->192.168.39.175:22: read: connection reset by peer
	W1027 21:50:07.034727  357212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47638->192.168.39.175:22: read: connection reset by peer
	I1027 21:50:07.034766  357212 retry.go:31] will retry after 189.594225ms: ssh: handshake failed: read tcp 192.168.39.1:47638->192.168.39.175:22: read: connection reset by peer
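These handshake resets are retried with short backoffs and do not recur later in this run, so the SSH endpoint recovered on its own. Had they persisted, the same connection could be probed by hand with the key, user and address recorded above, for example:

	ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa docker@192.168.39.175 'echo ok'

(a hypothetical manual check, not something the harness runs).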
	I1027 21:50:07.519406  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 21:50:07.666869  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 21:50:07.673004  357212 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 21:50:07.673028  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 21:50:07.694270  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 21:50:07.723326  357212 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 21:50:07.723368  357212 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 21:50:07.737667  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 21:50:07.738046  357212 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:07.738068  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 21:50:07.758450  357212 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 21:50:07.758487  357212 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 21:50:07.776959  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 21:50:07.836713  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 21:50:07.987471  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 21:50:08.061139  357212 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 21:50:08.061174  357212 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 21:50:08.111246  357212 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 21:50:08.111281  357212 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 21:50:08.211720  357212 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.677527377s)
	I1027 21:50:08.211794  357212 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.673389997s)
	I1027 21:50:08.211913  357212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 21:50:08.211955  357212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 21:50:08.400741  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 21:50:08.498931  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 21:50:08.505993  357212 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 21:50:08.506025  357212 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 21:50:08.589769  357212 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 21:50:08.589814  357212 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 21:50:08.598797  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:08.613419  357212 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 21:50:08.613459  357212 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 21:50:08.835317  357212 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 21:50:08.835354  357212 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 21:50:08.892439  357212 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 21:50:08.892465  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 21:50:09.008856  357212 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 21:50:09.008912  357212 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 21:50:09.016948  357212 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 21:50:09.016981  357212 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 21:50:09.129274  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 21:50:09.178335  357212 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 21:50:09.178377  357212 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 21:50:09.297000  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 21:50:09.476290  357212 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 21:50:09.476330  357212 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 21:50:09.483164  357212 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 21:50:09.483204  357212 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 21:50:09.559750  357212 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 21:50:09.559780  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 21:50:09.923471  357212 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 21:50:09.923503  357212 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 21:50:09.928602  357212 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 21:50:09.928625  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 21:50:10.026332  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 21:50:10.319470  357212 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 21:50:10.319515  357212 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 21:50:10.348231  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 21:50:10.442877  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.923416393s)
	I1027 21:50:10.442922  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.775989209s)
	I1027 21:50:10.442981  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.748669268s)
	I1027 21:50:10.468206  357212 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 21:50:10.468242  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 21:50:11.124852  357212 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 21:50:11.124885  357212 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 21:50:11.583058  357212 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 21:50:11.583085  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 21:50:12.150477  357212 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 21:50:12.150508  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 21:50:12.540450  357212 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 21:50:12.540485  357212 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 21:50:12.893375  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 21:50:13.212677  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.474958149s)
	I1027 21:50:13.691590  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.854839728s)
	I1027 21:50:13.691645  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.70414822s)
	I1027 21:50:13.691590  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.914543289s)
	I1027 21:50:13.691704  357212 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.479772381s)
	I1027 21:50:13.691778  357212 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.479797002s)
	I1027 21:50:13.691809  357212 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
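The sed pipeline above rewrites the kube-system/coredns ConfigMap in place; after the replace, the Corefile is expected to contain, just above its forward stanza, a block equivalent to:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

(plus a log directive inserted ahead of the errors line). The resulting ConfigMap is not dumped in this log; it can be inspected with the same kubectl -n kube-system get configmap coredns -o yaml invocation used above.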
	I1027 21:50:13.692466  357212 node_ready.go:35] waiting up to 6m0s for node "addons-865238" to be "Ready" ...
	I1027 21:50:13.708875  357212 node_ready.go:49] node "addons-865238" is "Ready"
	I1027 21:50:13.708955  357212 node_ready.go:38] duration metric: took 16.455073ms for node "addons-865238" to be "Ready" ...
	I1027 21:50:13.708977  357212 api_server.go:52] waiting for apiserver process to appear ...
	I1027 21:50:13.709056  357212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 21:50:13.978580  357212 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 21:50:13.982350  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:13.983038  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:13.983082  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:13.983302  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:14.204058  357212 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-865238" context rescaled to 1 replicas
	I1027 21:50:15.043999  357212 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 21:50:15.356311  357212 addons.go:238] Setting addon gcp-auth=true in "addons-865238"
	I1027 21:50:15.356393  357212 host.go:66] Checking if "addons-865238" exists ...
	I1027 21:50:15.358435  357212 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 21:50:15.360931  357212 main.go:143] libmachine: domain addons-865238 has defined MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:15.361451  357212 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:65:7c", ip: ""} in network mk-addons-865238: {Iface:virbr1 ExpiryTime:2025-10-27 22:49:37 +0000 UTC Type:0 Mac:52:54:00:60:65:7c Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-865238 Clientid:01:52:54:00:60:65:7c}
	I1027 21:50:15.361480  357212 main.go:143] libmachine: domain addons-865238 has defined IP address 192.168.39.175 and MAC address 52:54:00:60:65:7c in network mk-addons-865238
	I1027 21:50:15.361643  357212 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/addons-865238/id_rsa Username:docker}
	I1027 21:50:17.618768  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.217974876s)
	I1027 21:50:17.618826  357212 addons.go:479] Verifying addon ingress=true in "addons-865238"
	I1027 21:50:17.618864  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.119882801s)
	I1027 21:50:17.618980  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.020143975s)
	W1027 21:50:17.619047  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:17.619071  357212 retry.go:31] will retry after 288.090303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
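	The failing file here is /etc/kubernetes/addons/ig-crd.yaml, which was copied over at only 14 bytes earlier in this log, so the validator's complaint that apiVersion and kind are unset is consistent with an effectively empty manifest. For reference, any applied object needs at least those two fields, e.g. for a CRD:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition

	The retry below re-runs the same apply with --force rather than disabling validation.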
	I1027 21:50:17.619192  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.489881824s)
	I1027 21:50:17.619222  357212 addons.go:479] Verifying addon metrics-server=true in "addons-865238"
	I1027 21:50:17.619272  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.322229002s)
	I1027 21:50:17.619291  357212 addons.go:479] Verifying addon registry=true in "addons-865238"
	I1027 21:50:17.619359  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.592972491s)
	I1027 21:50:17.619450  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.271173855s)
	W1027 21:50:17.619482  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 21:50:17.619506  357212 retry.go:31] will retry after 149.728437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
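	This failure is an ordering problem rather than a bad manifest: the three snapshot.storage.k8s.io CRDs listed in the stdout above are created in this same pass, but the VolumeSnapshotClass cannot be mapped until those CRDs are established. By the time the forced re-apply below runs, the kind should resolve; whether it has can be checked with, for example:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io

	(a hypothetical check mirroring the invocation style used throughout this log).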
	I1027 21:50:17.621170  357212 out.go:179] * Verifying ingress addon...
	I1027 21:50:17.622232  357212 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-865238 service yakd-dashboard -n yakd-dashboard
	
	I1027 21:50:17.622241  357212 out.go:179] * Verifying registry addon...
	I1027 21:50:17.624217  357212 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 21:50:17.625712  357212 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 21:50:17.697240  357212 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 21:50:17.697269  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:17.700170  357212 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 21:50:17.700197  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:17.769867  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 21:50:17.907946  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:18.165204  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:18.166716  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:18.726699  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:18.726758  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:18.877504  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.984057608s)
	I1027 21:50:18.877531  357212 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.168445906s)
	I1027 21:50:18.877569  357212 api_server.go:72] duration metric: took 12.343346256s to wait for apiserver process to appear ...
	I1027 21:50:18.877604  357212 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-865238"
	I1027 21:50:18.877608  357212 api_server.go:88] waiting for apiserver healthz status ...
	I1027 21:50:18.877616  357212 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.51915341s)
	I1027 21:50:18.877642  357212 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1027 21:50:18.879916  357212 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 21:50:18.879940  357212 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 21:50:18.881462  357212 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 21:50:18.881938  357212 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 21:50:18.882962  357212 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 21:50:18.882989  357212 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 21:50:18.928004  357212 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 21:50:18.928043  357212 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 21:50:18.950832  357212 api_server.go:279] https://192.168.39.175:8443/healthz returned 200:
	ok
	I1027 21:50:18.953908  357212 api_server.go:141] control plane version: v1.34.1
	I1027 21:50:18.953939  357212 api_server.go:131] duration metric: took 76.317165ms to wait for apiserver health ...
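Editorial note: the healthz wait above is a plain HTTPS GET against the control plane endpoint shown in the log; a one-line sketch of the same probe from the host, skipping certificate verification for brevity (illustration only):

	# prints "ok" once the apiserver reports healthy
	curl -sk https://192.168.39.175:8443/healthz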
	I1027 21:50:18.953949  357212 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 21:50:18.996640  357212 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 21:50:18.996669  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:19.006799  357212 system_pods.go:59] 20 kube-system pods found
	I1027 21:50:19.006866  357212 system_pods.go:61] "amd-gpu-device-plugin-zwgdd" [3e60fd84-e823-4898-9d5f-51c25e535361] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 21:50:19.006899  357212 system_pods.go:61] "coredns-66bc5c9577-68w47" [4d7175f5-e708-447a-af89-f17a2745c753] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 21:50:19.006914  357212 system_pods.go:61] "coredns-66bc5c9577-p4vn5" [6f020418-bbd8-4b55-9e18-e488cd5f81f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 21:50:19.006963  357212 system_pods.go:61] "csi-hostpath-attacher-0" [a869a29d-9e68-482c-bc7a-3711937e761b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 21:50:19.006969  357212 system_pods.go:61] "csi-hostpath-resizer-0" [30b1547a-fba2-440a-9e85-323e6ac9e0a9] Pending
	I1027 21:50:19.006978  357212 system_pods.go:61] "csi-hostpathplugin-v22qx" [8dcd70af-5f85-4ced-b998-391040fb2cdf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 21:50:19.006987  357212 system_pods.go:61] "etcd-addons-865238" [4e051238-1955-4813-9e5d-3c42b845c55f] Running
	I1027 21:50:19.006994  357212 system_pods.go:61] "kube-apiserver-addons-865238" [af2172ba-ac1a-4940-814f-4a2d17e1135e] Running
	I1027 21:50:19.007003  357212 system_pods.go:61] "kube-controller-manager-addons-865238" [ae64e951-4627-41f0-8a4e-9aa4430b3686] Running
	I1027 21:50:19.007011  357212 system_pods.go:61] "kube-ingress-dns-minikube" [3f85e553-b909-45c3-a6a4-67606540769d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 21:50:19.007020  357212 system_pods.go:61] "kube-proxy-7z9xg" [15c180ba-494d-4c35-af5c-b5239408cd66] Running
	I1027 21:50:19.007026  357212 system_pods.go:61] "kube-scheduler-addons-865238" [ec29b879-82d3-4ac5-aa51-c8af3915feeb] Running
	I1027 21:50:19.007036  357212 system_pods.go:61] "metrics-server-85b7d694d7-dsd4x" [16454441-4f3c-4401-b6ca-c56647697e9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 21:50:19.007045  357212 system_pods.go:61] "nvidia-device-plugin-daemonset-xdn5t" [2f3c3b15-8971-406b-99c3-881d986c3fa5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 21:50:19.007059  357212 system_pods.go:61] "registry-6b586f9694-9j6vm" [9ee1777e-23f5-4221-b374-9a1234ea50f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 21:50:19.007067  357212 system_pods.go:61] "registry-creds-764b6fb674-jpqnm" [0066aebe-efcf-47e0-b1e7-b86db905c6fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 21:50:19.007075  357212 system_pods.go:61] "registry-proxy-vv9hz" [e0b028ef-aa0b-4b73-ac69-1e31aeb5123a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 21:50:19.007083  357212 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gzcwf" [53bb36c9-9685-4918-bfe1-51d90c64254a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:50:19.007095  357212 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zl8fr" [b9734093-d037-497f-9b32-0e586c7ffdef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:50:19.007105  357212 system_pods.go:61] "storage-provisioner" [7142b212-9d5b-41df-b0fc-2e0642bd74eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 21:50:19.007113  357212 system_pods.go:74] duration metric: took 53.157696ms to wait for pod list to return data ...
	I1027 21:50:19.007130  357212 default_sa.go:34] waiting for default service account to be created ...
	I1027 21:50:19.024093  357212 default_sa.go:45] found service account: "default"
	I1027 21:50:19.024131  357212 default_sa.go:55] duration metric: took 16.992313ms for default service account to be created ...
	I1027 21:50:19.024142  357212 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 21:50:19.051429  357212 system_pods.go:86] 20 kube-system pods found
	I1027 21:50:19.051474  357212 system_pods.go:89] "amd-gpu-device-plugin-zwgdd" [3e60fd84-e823-4898-9d5f-51c25e535361] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 21:50:19.051486  357212 system_pods.go:89] "coredns-66bc5c9577-68w47" [4d7175f5-e708-447a-af89-f17a2745c753] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 21:50:19.051498  357212 system_pods.go:89] "coredns-66bc5c9577-p4vn5" [6f020418-bbd8-4b55-9e18-e488cd5f81f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 21:50:19.051509  357212 system_pods.go:89] "csi-hostpath-attacher-0" [a869a29d-9e68-482c-bc7a-3711937e761b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 21:50:19.051514  357212 system_pods.go:89] "csi-hostpath-resizer-0" [30b1547a-fba2-440a-9e85-323e6ac9e0a9] Pending
	I1027 21:50:19.051522  357212 system_pods.go:89] "csi-hostpathplugin-v22qx" [8dcd70af-5f85-4ced-b998-391040fb2cdf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 21:50:19.051531  357212 system_pods.go:89] "etcd-addons-865238" [4e051238-1955-4813-9e5d-3c42b845c55f] Running
	I1027 21:50:19.051537  357212 system_pods.go:89] "kube-apiserver-addons-865238" [af2172ba-ac1a-4940-814f-4a2d17e1135e] Running
	I1027 21:50:19.051542  357212 system_pods.go:89] "kube-controller-manager-addons-865238" [ae64e951-4627-41f0-8a4e-9aa4430b3686] Running
	I1027 21:50:19.051560  357212 system_pods.go:89] "kube-ingress-dns-minikube" [3f85e553-b909-45c3-a6a4-67606540769d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 21:50:19.051565  357212 system_pods.go:89] "kube-proxy-7z9xg" [15c180ba-494d-4c35-af5c-b5239408cd66] Running
	I1027 21:50:19.051571  357212 system_pods.go:89] "kube-scheduler-addons-865238" [ec29b879-82d3-4ac5-aa51-c8af3915feeb] Running
	I1027 21:50:19.051579  357212 system_pods.go:89] "metrics-server-85b7d694d7-dsd4x" [16454441-4f3c-4401-b6ca-c56647697e9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 21:50:19.051591  357212 system_pods.go:89] "nvidia-device-plugin-daemonset-xdn5t" [2f3c3b15-8971-406b-99c3-881d986c3fa5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 21:50:19.051605  357212 system_pods.go:89] "registry-6b586f9694-9j6vm" [9ee1777e-23f5-4221-b374-9a1234ea50f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 21:50:19.051616  357212 system_pods.go:89] "registry-creds-764b6fb674-jpqnm" [0066aebe-efcf-47e0-b1e7-b86db905c6fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 21:50:19.051627  357212 system_pods.go:89] "registry-proxy-vv9hz" [e0b028ef-aa0b-4b73-ac69-1e31aeb5123a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 21:50:19.051639  357212 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gzcwf" [53bb36c9-9685-4918-bfe1-51d90c64254a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:50:19.051648  357212 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zl8fr" [b9734093-d037-497f-9b32-0e586c7ffdef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:50:19.051659  357212 system_pods.go:89] "storage-provisioner" [7142b212-9d5b-41df-b0fc-2e0642bd74eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 21:50:19.051670  357212 system_pods.go:126] duration metric: took 27.520237ms to wait for k8s-apps to be running ...
	I1027 21:50:19.051685  357212 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 21:50:19.051753  357212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 21:50:19.137967  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:19.139525  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:19.286070  357212 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 21:50:19.286106  357212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 21:50:19.391717  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:19.462623  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 21:50:19.651603  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:19.655275  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:19.890595  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:20.134985  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:20.136090  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:20.395818  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:20.636655  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:20.642602  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:20.894854  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:21.140267  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:21.143182  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:21.335066  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.565118661s)
	I1027 21:50:21.393442  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:21.633742  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:21.652539  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:21.745965  357212 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.694176261s)
	I1027 21:50:21.746018  357212 system_svc.go:56] duration metric: took 2.694328176s WaitForService to wait for kubelet
	I1027 21:50:21.746030  357212 kubeadm.go:587] duration metric: took 15.211808209s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 21:50:21.746051  357212 node_conditions.go:102] verifying NodePressure condition ...
	I1027 21:50:21.746282  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.838288164s)
	W1027 21:50:21.746352  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:21.746381  357212 retry.go:31] will retry after 489.138427ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
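Editorial note: unlike the snapshot-class race above, every re-apply of ig-crd.yaml visible in this log fails with the same validation error. kubectl reports "apiVersion not set, kind not set" when a document in the file is missing those two mandatory top-level fields (an empty or truncated YAML document produces the same message), so this looks deterministic rather than timing-related. A hedged sketch of inspecting the copy that was pushed to the node, using the profile's ssh entry point (the head length is arbitrary; illustration only):

	# show the start of the manifest kubectl keeps rejecting
	minikube -p addons-865238 ssh -- sudo head -n 20 /etc/kubernetes/addons/ig-crd.yaml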
	I1027 21:50:21.763801  357212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 21:50:21.763836  357212 node_conditions.go:123] node cpu capacity is 2
	I1027 21:50:21.763849  357212 node_conditions.go:105] duration metric: took 17.792933ms to run NodePressure ...
	I1027 21:50:21.763864  357212 start.go:242] waiting for startup goroutines ...
	I1027 21:50:21.948606  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:22.032474  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.569795902s)
	I1027 21:50:22.033565  357212 addons.go:479] Verifying addon gcp-auth=true in "addons-865238"
	I1027 21:50:22.035563  357212 out.go:179] * Verifying gcp-auth addon...
	I1027 21:50:22.037925  357212 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 21:50:22.076631  357212 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 21:50:22.076660  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:22.162594  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:22.162671  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:22.235809  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:22.391376  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:22.544228  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:22.629557  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:22.633993  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:22.889684  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:23.047540  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:23.136087  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:23.139682  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:23.391377  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:23.547644  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:23.633780  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:23.643702  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:23.891548  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:23.941470  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.705603612s)
	W1027 21:50:23.941537  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:23.941584  357212 retry.go:31] will retry after 480.486765ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:24.045133  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:24.134832  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:24.138476  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:24.390645  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:24.422834  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:24.543233  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:24.630477  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:24.634810  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:24.893283  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:25.043432  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:25.130093  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:25.137477  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:25.391564  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:25.548220  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:25.575241  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.152351803s)
	W1027 21:50:25.575291  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:25.575321  357212 retry.go:31] will retry after 713.809254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:25.638276  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:25.640424  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:25.886958  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:26.044269  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:26.127929  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:26.131505  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:26.289709  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:26.386596  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:26.545560  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:26.629182  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:26.630357  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:26.889379  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:27.044411  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:27.131133  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:27.131528  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:27.389284  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:27.504804  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.215040644s)
	W1027 21:50:27.504855  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:27.504879  357212 retry.go:31] will retry after 787.080135ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:27.542304  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:27.631724  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:27.634616  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:27.888880  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:28.042066  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:28.131167  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:28.131662  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:28.293124  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:28.391149  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:28.545338  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:28.633015  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:28.633103  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:28.893223  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:29.044206  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:29.132729  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:29.134667  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:29.385467  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:29.431974  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.13879622s)
	W1027 21:50:29.432025  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:29.432054  357212 retry.go:31] will retry after 1.446169933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:29.543920  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:29.629802  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:29.630962  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:29.891959  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:30.044742  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:30.129868  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:30.133389  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:30.392620  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:30.542401  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:30.630571  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:30.631279  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:30.878647  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:30.889051  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:31.047568  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:31.129272  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:31.132545  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:31.389104  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:31.544401  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:31.635794  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:31.637080  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:31.890878  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:31.926484  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.047773284s)
	W1027 21:50:31.926530  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:31.926556  357212 retry.go:31] will retry after 4.145443389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:32.042500  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:32.131252  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:32.131399  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:32.389812  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:32.545569  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:32.631299  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:32.633810  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:32.891784  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:33.045751  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:33.130246  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:33.132266  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:33.388033  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:33.542644  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:33.638467  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:33.640367  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:33.890877  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:34.043632  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:34.128268  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:34.131657  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:34.387758  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:34.542848  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:34.629741  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:34.630721  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:34.888034  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:35.046018  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:35.129356  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:35.129945  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:35.390479  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:35.541711  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:35.634191  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:35.635812  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:35.888254  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:36.041232  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:36.072278  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:36.133991  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:36.134898  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:36.391615  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:36.542085  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:36.628539  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:36.635109  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:36.888027  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:37.042513  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:37.134968  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:37.135780  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:37.390086  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:37.494391  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.422054453s)
	W1027 21:50:37.494468  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:37.494496  357212 retry.go:31] will retry after 4.73279209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:37.547997  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:37.628923  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:37.631803  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:37.893312  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:38.042582  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:38.134242  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:38.135815  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:38.390262  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:38.542591  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:38.633419  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:38.638476  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:38.892044  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:39.047696  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:39.130593  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:39.132254  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:39.386352  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:39.544911  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:39.701583  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:39.707806  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:39.889828  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:40.043309  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:40.131756  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:40.132993  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:40.389358  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:40.678064  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:40.678127  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:40.678337  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:40.886833  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:41.042127  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:41.134209  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:41.139866  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:41.388816  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:41.543758  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:41.628495  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:41.630747  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:41.888026  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:42.045275  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:42.131017  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:42.134698  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:42.227961  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:42.387651  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:42.830509  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:42.830638  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:42.832610  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:42.927138  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:43.044683  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:43.132829  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:43.134514  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:43.366619  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.138601835s)
	W1027 21:50:43.366683  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:43.366713  357212 retry.go:31] will retry after 8.647316748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
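
The stderr above is kubectl's schema validation rejecting the manifest because a document in ig-crd.yaml is missing its required top-level apiVersion and kind fields. Below is a minimal sketch of that same check, assuming gopkg.in/yaml.v3 and reusing the file path from the log; it is an illustration of what the validator is complaining about, not the addon's own tooling.

    package main

    import (
        "bytes"
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Path copied from the kubectl invocation in the log above.
        data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
        if err != nil {
            panic(err)
        }
        // Walk every YAML document in the file and report missing required fields.
        dec := yaml.NewDecoder(bytes.NewReader(data))
        for i := 1; ; i++ {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                panic(err)
            }
            for _, field := range []string{"apiVersion", "kind"} {
                if _, ok := doc[field]; !ok {
                    // Mirrors the wording of the kubectl error seen in the log.
                    fmt.Printf("document %d: %s not set\n", i, field)
                }
            }
        }
    }
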
	I1027 21:50:43.389213  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:43.546547  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:43.631993  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:43.637409  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:43.889058  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:44.041922  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:44.129260  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:44.129710  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:44.390389  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:44.542090  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:44.631470  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:44.632119  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:44.893798  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:45.042773  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:45.130879  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:45.132609  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:45.389623  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:45.544931  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:45.634073  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:45.634099  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:45.886322  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:46.044593  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:46.129524  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:46.136528  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:46.392283  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:46.542477  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:46.628092  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:46.629044  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:46.890032  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:47.041136  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:47.129272  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:47.131561  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:47.387725  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:47.543505  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:47.632291  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:47.632525  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:47.886232  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:48.043925  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:48.142304  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:48.142863  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:48.386016  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:48.542284  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:48.636054  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:48.636133  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:48.909423  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:49.052310  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:49.148388  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:49.150610  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:49.388465  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:49.543121  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:49.634260  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:49.646952  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:49.888396  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:50.045061  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:50.134993  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:50.135657  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:50.387035  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:50.546486  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:50.630616  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:50.634146  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:50.888318  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:51.138580  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:51.145117  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:51.147966  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:51.387868  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:51.545323  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:51.632056  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:51.636031  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:51.887439  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:52.014635  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:50:52.046428  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:52.136377  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:52.136540  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:52.392429  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:52.543413  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:52.633650  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:52.634439  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:52.888056  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 21:50:52.911429  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:52.911465  357212 retry.go:31] will retry after 8.594389292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:50:53.043311  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:53.128395  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:53.131669  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:53.388613  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:53.544148  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:53.637458  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:53.639883  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:53.892030  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:54.046802  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:54.131700  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:54.133438  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:54.389200  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:54.541746  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:54.634752  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:54.635580  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:54.890227  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:55.043094  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:55.129260  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:55.131745  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:55.387195  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:55.550349  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:55.630213  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:55.630743  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:55.890132  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:56.045427  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:56.130486  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:56.131540  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:56.387435  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:56.543016  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:56.630067  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:56.630112  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:56.887470  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:57.042052  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:57.129722  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:57.132345  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:57.389704  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:57.542248  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:57.637850  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:57.637991  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:57.886739  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:58.042415  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:58.129078  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:58.130927  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:58.386592  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:58.542826  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:58.630731  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:58.633116  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:58.889253  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:59.041495  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:59.129148  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:59.130251  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:59.387321  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:50:59.555341  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:50:59.661348  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:50:59.661444  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:50:59.888278  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:00.042792  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:00.129334  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:00.130883  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:00.388253  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:00.543207  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:00.629390  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:00.631453  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:00.887843  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:01.045769  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:01.132757  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:01.132821  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:01.390700  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:01.506879  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:51:01.543532  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:01.634357  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:01.636723  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:01.890329  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:02.046031  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:02.144810  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:02.145800  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:02.389122  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:02.543251  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:02.628778  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:02.631683  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:02.813398  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.306451675s)
	W1027 21:51:02.813453  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:51:02.813477  357212 retry.go:31] will retry after 20.01913733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:51:02.896924  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:03.045123  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:03.139417  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:03.143238  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:03.390474  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:03.543664  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:03.629398  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:03.636316  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:03.890571  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:04.045580  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:04.130612  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:04.133243  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:04.387077  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:04.544677  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:04.635844  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:04.638247  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:04.893172  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:05.042499  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:05.132428  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:05.133545  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:05.388617  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:05.549475  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:05.631664  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:05.639214  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:51:05.888258  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:06.044053  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:06.133272  357212 kapi.go:107] duration metric: took 48.507553326s to wait for kubernetes.io/minikube-addons=registry ...
	I1027 21:51:06.138956  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:06.393918  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:06.543371  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:06.628978  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:06.890519  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:07.042817  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:07.129955  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:07.389383  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:07.550683  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:07.651031  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:07.893843  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:08.047952  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:08.129840  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:08.387012  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:08.543084  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:08.629859  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:08.891188  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:09.042771  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:09.128311  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:09.389626  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:09.547975  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:09.633405  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:09.893507  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:10.052347  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:10.130344  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:10.388770  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:10.544320  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:10.646315  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:10.893240  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:11.045776  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:11.130489  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:11.387945  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:11.543122  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:11.630583  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:11.892257  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:12.044484  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:12.129502  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:12.387505  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:12.547731  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:12.630283  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:12.888366  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:13.042205  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:13.134258  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:13.597547  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:13.598029  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:13.694938  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:13.890873  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:14.059304  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:14.131372  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:14.386940  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:14.542557  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:14.630223  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:14.888776  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:15.043645  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:15.130793  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:15.392934  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:15.543397  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:15.646301  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:15.891777  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:16.044964  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:16.130673  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:16.390692  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:16.543314  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:16.633914  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:16.887085  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:17.041909  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:17.144640  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:17.387820  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:17.542359  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:17.629433  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:17.898171  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:18.042977  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:18.259275  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:18.388497  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:18.542970  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:18.629054  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:18.885270  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:19.041670  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:19.129134  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:19.388056  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:19.542289  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:19.631115  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:19.887089  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:20.045076  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:20.145053  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:20.387325  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:20.542480  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:20.634094  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:20.888256  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:21.043168  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:21.129331  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:21.386839  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:21.542746  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:21.629113  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:21.890556  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:22.046543  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:22.146459  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:22.392512  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:51:22.543374  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:22.646904  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:22.833025  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:51:22.896437  357212 kapi.go:107] duration metric: took 1m4.01449483s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1027 21:51:23.047265  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:23.145026  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:23.554013  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:23.633878  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:24.065565  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:24.129639  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:24.548533  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:24.586748  357212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.753670049s)
	W1027 21:51:24.586809  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:51:24.586846  357212 retry.go:31] will retry after 30.908708086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:51:24.636967  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:25.041854  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:25.131323  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:25.546087  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:25.632710  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:26.053264  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:26.133477  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:26.543370  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:26.633428  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:27.043248  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:27.130452  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:27.542395  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:27.630764  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:28.044438  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:28.134586  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:28.545919  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:28.634299  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:29.042308  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:29.131777  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:29.543100  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:29.632744  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:30.048647  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:30.130246  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:30.544107  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:30.629661  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:31.043604  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:31.146302  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:31.546097  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:31.631037  357212 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:51:32.043621  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:32.355418  357212 kapi.go:107] duration metric: took 1m14.731197804s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 21:51:32.541797  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:33.041741  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:33.544830  357212 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:51:34.042456  357212 kapi.go:107] duration metric: took 1m12.004528563s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 21:51:34.044351  357212 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-865238 cluster.
	I1027 21:51:34.045531  357212 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 21:51:34.046757  357212 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
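
The gcp-auth messages above state that the opt-out is a label whose key is gcp-auth-skip-secret on the pod. A minimal sketch of such a pod spec follows, built with the Kubernetes Go types and printed as YAML; the pod name, the image, and the label value "true" are illustrative assumptions, and only the label key comes from the message above.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{
                Name: "no-gcp-creds", // illustrative name
                // Only the label key is taken from the gcp-auth message; the value is assumed.
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }
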
	I1027 21:51:55.496066  357212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1027 21:51:56.288914  357212 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 21:51:56.289061  357212 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
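The validation failure above means /etc/kubernetes/addons/ig-crd.yaml reached kubectl without the two header fields every Kubernetes manifest needs, `apiVersion` and `kind`. As an illustrative sketch only (not the inspektor-gadget CRD itself), a minimal CustomResourceDefinition with those fields set looks like:

    apiVersion: apiextensions.k8s.io/v1   # field reported as "not set" by the validator
    kind: CustomResourceDefinition        # field reported as "not set" by the validator
    metadata:
      name: examples.demo.example.com     # hypothetical CRD name, for illustration only
    spec:
      group: demo.example.com
      scope: Namespaced
      names:
        plural: examples
        singular: example
        kind: Example
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object

The real addon manifest would define the gadget CRD's own group, names, and schema; only the presence of the header fields is the point here.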
	I1027 21:51:56.291630  357212 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, default-storageclass, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1027 21:51:56.292994  357212 addons.go:514] duration metric: took 1m49.758734173s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin default-storageclass storage-provisioner cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1027 21:51:56.293062  357212 start.go:247] waiting for cluster config update ...
	I1027 21:51:56.293086  357212 start.go:256] writing updated cluster config ...
	I1027 21:51:56.293411  357212 ssh_runner.go:195] Run: rm -f paused
	I1027 21:51:56.299964  357212 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 21:51:56.305325  357212 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-68w47" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:56.313519  357212 pod_ready.go:94] pod "coredns-66bc5c9577-68w47" is "Ready"
	I1027 21:51:56.313551  357212 pod_ready.go:86] duration metric: took 8.193753ms for pod "coredns-66bc5c9577-68w47" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:56.316756  357212 pod_ready.go:83] waiting for pod "etcd-addons-865238" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:56.324771  357212 pod_ready.go:94] pod "etcd-addons-865238" is "Ready"
	I1027 21:51:56.324816  357212 pod_ready.go:86] duration metric: took 8.032128ms for pod "etcd-addons-865238" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:56.327941  357212 pod_ready.go:83] waiting for pod "kube-apiserver-addons-865238" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:56.334650  357212 pod_ready.go:94] pod "kube-apiserver-addons-865238" is "Ready"
	I1027 21:51:56.334688  357212 pod_ready.go:86] duration metric: took 6.71739ms for pod "kube-apiserver-addons-865238" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:56.337781  357212 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-865238" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:56.706028  357212 pod_ready.go:94] pod "kube-controller-manager-addons-865238" is "Ready"
	I1027 21:51:56.706077  357212 pod_ready.go:86] duration metric: took 368.260575ms for pod "kube-controller-manager-addons-865238" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:56.905447  357212 pod_ready.go:83] waiting for pod "kube-proxy-7z9xg" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:57.304605  357212 pod_ready.go:94] pod "kube-proxy-7z9xg" is "Ready"
	I1027 21:51:57.304649  357212 pod_ready.go:86] duration metric: took 399.167409ms for pod "kube-proxy-7z9xg" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:57.504140  357212 pod_ready.go:83] waiting for pod "kube-scheduler-addons-865238" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:57.904963  357212 pod_ready.go:94] pod "kube-scheduler-addons-865238" is "Ready"
	I1027 21:51:57.905006  357212 pod_ready.go:86] duration metric: took 400.83105ms for pod "kube-scheduler-addons-865238" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:51:57.905024  357212 pod_ready.go:40] duration metric: took 1.605020438s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 21:51:57.951733  357212 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 21:51:57.953815  357212 out.go:179] * Done! kubectl is now configured to use "addons-865238" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.801082943Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.801128694Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.836251729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21edd6ee-beb7-45dd-9ab4-3fad6afd762c name=/runtime.v1.RuntimeService/Version
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.836353995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21edd6ee-beb7-45dd-9ab4-3fad6afd762c name=/runtime.v1.RuntimeService/Version
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.838066883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acd7a1ce-621f-4bce-a68e-e9c1e1d759e3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.839552045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761602107839523919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598771,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acd7a1ce-621f-4bce-a68e-e9c1e1d759e3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.840881849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=209a66e7-1f42-4625-96b0-e8ce39891927 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.841000413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=209a66e7-1f42-4625-96b0-e8ce39891927 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.841404991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df7d908c7934fb89c6efea84fffa60934cf2851590005e7be760d7e84c31c747,PodSandboxId:49eaef377dfe28b0629d2cd86488442aa4d382faf6a7d26be12f44a4666f0246,Metadata:&ContainerMetadata{Name:registry-creds,Attempt:0,},Image:&ImageSpec{Image:docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14,State:CONTAINER_RUNNING,CreatedAt:1761602062944357114,Labels:map[string]string{io.kubernetes.container.name: registry-creds,io.kubernetes.pod.name: registry-creds-764b6fb674-jpqnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0066aebe-efcf-47e0-b1e7-b86db905c6fa,},Annotations:map[string]string{io.kubernetes.container.hash: cbd9560c,io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a74a5314baf7445da6f9842473ea38eff4b945436f8869fbc60fe8feceb5079,PodSandboxId:24ed64471618d820f7bc1c61eab4c7bf5deb2b82f23797487f118f3b2b4d71cb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761601963639147810,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8e9adf6-9ebd-4271-b241-a112a7898205,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"cont
ainerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f9a33ba953b1bd67117fb73ee83165ea9478a96b5b1b5c42843a476129afa3,PodSandboxId:beec0a1558917665d12e61b086bca6f2d5f0e8953d54564ad7328195eaf7e3ec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761601920432272100,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88657516-1699-4de3-80c1-13dffabfc378,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88efe19bb8368ec417b76990932b83673d0ffa7a5a00d39e752f5cd42affb6b6,PodSandboxId:05a0f2dd2d3135f7e6776542d9d3218ff5bb835851620a24908ec51dbaef51ee,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761601890848452335,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-v4xd6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f8be4230-9e49-4563-ae7f-5446af4a2657,},Annota
tions:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4a491990b8610e3e56922ef81931d3899638564a6f14cde2b9cd2abbedbc8c90,PodSandboxId:3e79a834c3d17c76df6ed5bef2b88fd7d57c77d29ebb00ef4114328e2fcece47,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa9
3917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761601882617402649,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-pp5nk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70d7bfd4-fc74-4f1c-997c-f8a090a71b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f196b1418941bc7c05580d855dc48ff0a1625e1d754ece9e2a7e23938f5d16c,PodSandboxId:1eaaf81a0f4a3fc53d64dda4ca66a6a14445d411a8f18ec0ffd5bc6489a78c79,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761601873811903134,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qg6jb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea477d37-e2ca-468e-a444-07fcd1d2b6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3f33f008af52f9926a72b6aef50f48c4dd2adcd16ca14c362c4c0a5b46dfb1,PodSandboxId:75a677d68b78ec81a04119266793498b924c39c451f4ff3017a835cce9f14f3e,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761601857230318263,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jhhrx,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 419b3ce9-a1a3-4f6d-881d-b766bf86b6f9,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b062b3a01eb1068e1c00adcf5d5dcaef17094f5735eb337c5095b7b8297d5a9,PodSandboxId:02b51b944864ed77a4ea3ce65172b191c14625055f07e8b0d9959eba2267af06,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0
cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761601844294795641,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f85e553-b909-45c3-a6a4-67606540769d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe72f6cade441dcfefd723121530ca56b6d795480b43ed7b20eac064465accd5,PodSandboxId:bfed73207b8dcc9e7603a361c2141a2738c786493ddf72f99d28f0a7132513e5,Me
tadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761601818030453830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7142b212-9d5b-41df-b0fc-2e0642bd74eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b347db803db76104f5b8e22b6f58b7d2c66f63dfd5e3566e6923baee32ebae,PodSandboxId:0b314c1d51fc2018b4b296f65dcd85d26a42bfe40d160a05f95f900ee008e151,Metadata:&Cont
ainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761601816411210436,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zwgdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e60fd84-e823-4898-9d5f-51c25e535361,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53fad5c69865baf869b3e31d3c2e70270d814d5ddeb04fab61b1f5cc55c4d62,PodSandboxId:992843860b59dec8cab959f94fa3e77386d2f09
c4c6a4ed7bb75a82f25b8154f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761601808256719158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-68w47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7175f5-e708-447a-af89-f17a2745c753,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:256166291e6d99bae322cd4b5a441e93b35645f49374c60393928f0ee63252d6,PodSandboxId:e778278745b7acbadf2416ae57829308f898a04f518a860f62ba7b6d97a65c05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761601807215110107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7z9xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c180ba-494d-4c35-af5c-b5239408cd66,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCou
nt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5272762e20f7a319c335849c1f936bc5cddb087c3e7528705bfad3931973f,PodSandboxId:49b07f4622e3430c51ce0053178e044d946484a96bc80366264b1d4e3c500380,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761601795322113317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1782bc8a5930cfc5f91381a7b5ed0ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.
ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca04a08197cbf500d76ab372ede4837038c7cdc0d0ecf2d131cd355bbfdb16a,PodSandboxId:1fe2d888c36b1c1b824bcee5d4ce8a651259c83ae8aa1773f087ea6df2d96bf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761601795331111577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512e9579c013c3f78de9a1
8183460d79,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c33bd4f5fe3881186ba321c711231ddc3b9a2c90625be996a7398b25a0dddc7,PodSandboxId:55a74b68518db50a5cff49e1c9a42f705d12d170520446d0a25a9a06a5fc964c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761601795308910315,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedu
ler-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5c414c1431b3b7b03b86ef9da2c23f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47ef2929025d66c27a6165d3458d693292b4f9b0d8583bd9d27e8131714b9db,PodSandboxId:a7f282c54f8b222436f420aad47039d1ea23dd7501f113e9b641ad20284fc91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761601795290552395,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d343b52f2b769af26137db37052fa47f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=209a66e7-1f42-4625-96b0-e8ce39891927 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.881772918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=333fcca0-a8e4-49c8-b9a0-55503bdbb6b8 name=/runtime.v1.RuntimeService/Version
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.881873518Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=333fcca0-a8e4-49c8-b9a0-55503bdbb6b8 name=/runtime.v1.RuntimeService/Version
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.883236412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee23b88a-3e4d-404e-a391-d1c593914007 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.884677680Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761602107884576888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598771,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee23b88a-3e4d-404e-a391-d1c593914007 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.885678783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b86682b7-6194-49ac-9a69-f2d1d92ecfd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.885743955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b86682b7-6194-49ac-9a69-f2d1d92ecfd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.886080088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df7d908c7934fb89c6efea84fffa60934cf2851590005e7be760d7e84c31c747,PodSandboxId:49eaef377dfe28b0629d2cd86488442aa4d382faf6a7d26be12f44a4666f0246,Metadata:&ContainerMetadata{Name:registry-creds,Attempt:0,},Image:&ImageSpec{Image:docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14,State:CONTAINER_RUNNING,CreatedAt:1761602062944357114,Labels:map[string]string{io.kubernetes.container.name: registry-creds,io.kubernetes.pod.name: registry-creds-764b6fb674-jpqnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0066aebe-efcf-47e0-b1e7-b86db905c6fa,},Annotations:map[string]string{io.kubernetes.container.hash: cbd9560c,io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a74a5314baf7445da6f9842473ea38eff4b945436f8869fbc60fe8feceb5079,PodSandboxId:24ed64471618d820f7bc1c61eab4c7bf5deb2b82f23797487f118f3b2b4d71cb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761601963639147810,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8e9adf6-9ebd-4271-b241-a112a7898205,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"cont
ainerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f9a33ba953b1bd67117fb73ee83165ea9478a96b5b1b5c42843a476129afa3,PodSandboxId:beec0a1558917665d12e61b086bca6f2d5f0e8953d54564ad7328195eaf7e3ec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761601920432272100,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88657516-1699-4de3-80c1-13dffabfc378,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88efe19bb8368ec417b76990932b83673d0ffa7a5a00d39e752f5cd42affb6b6,PodSandboxId:05a0f2dd2d3135f7e6776542d9d3218ff5bb835851620a24908ec51dbaef51ee,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761601890848452335,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-v4xd6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f8be4230-9e49-4563-ae7f-5446af4a2657,},Annota
tions:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4a491990b8610e3e56922ef81931d3899638564a6f14cde2b9cd2abbedbc8c90,PodSandboxId:3e79a834c3d17c76df6ed5bef2b88fd7d57c77d29ebb00ef4114328e2fcece47,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa9
3917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761601882617402649,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-pp5nk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70d7bfd4-fc74-4f1c-997c-f8a090a71b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f196b1418941bc7c05580d855dc48ff0a1625e1d754ece9e2a7e23938f5d16c,PodSandboxId:1eaaf81a0f4a3fc53d64dda4ca66a6a14445d411a8f18ec0ffd5bc6489a78c79,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761601873811903134,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qg6jb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea477d37-e2ca-468e-a444-07fcd1d2b6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3f33f008af52f9926a72b6aef50f48c4dd2adcd16ca14c362c4c0a5b46dfb1,PodSandboxId:75a677d68b78ec81a04119266793498b924c39c451f4ff3017a835cce9f14f3e,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761601857230318263,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jhhrx,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 419b3ce9-a1a3-4f6d-881d-b766bf86b6f9,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b062b3a01eb1068e1c00adcf5d5dcaef17094f5735eb337c5095b7b8297d5a9,PodSandboxId:02b51b944864ed77a4ea3ce65172b191c14625055f07e8b0d9959eba2267af06,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0
cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761601844294795641,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f85e553-b909-45c3-a6a4-67606540769d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe72f6cade441dcfefd723121530ca56b6d795480b43ed7b20eac064465accd5,PodSandboxId:bfed73207b8dcc9e7603a361c2141a2738c786493ddf72f99d28f0a7132513e5,Me
tadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761601818030453830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7142b212-9d5b-41df-b0fc-2e0642bd74eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b347db803db76104f5b8e22b6f58b7d2c66f63dfd5e3566e6923baee32ebae,PodSandboxId:0b314c1d51fc2018b4b296f65dcd85d26a42bfe40d160a05f95f900ee008e151,Metadata:&Cont
ainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761601816411210436,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zwgdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e60fd84-e823-4898-9d5f-51c25e535361,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53fad5c69865baf869b3e31d3c2e70270d814d5ddeb04fab61b1f5cc55c4d62,PodSandboxId:992843860b59dec8cab959f94fa3e77386d2f09
c4c6a4ed7bb75a82f25b8154f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761601808256719158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-68w47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7175f5-e708-447a-af89-f17a2745c753,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:256166291e6d99bae322cd4b5a441e93b35645f49374c60393928f0ee63252d6,PodSandboxId:e778278745b7acbadf2416ae57829308f898a04f518a860f62ba7b6d97a65c05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761601807215110107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7z9xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c180ba-494d-4c35-af5c-b5239408cd66,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCou
nt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5272762e20f7a319c335849c1f936bc5cddb087c3e7528705bfad3931973f,PodSandboxId:49b07f4622e3430c51ce0053178e044d946484a96bc80366264b1d4e3c500380,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761601795322113317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1782bc8a5930cfc5f91381a7b5ed0ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.
ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca04a08197cbf500d76ab372ede4837038c7cdc0d0ecf2d131cd355bbfdb16a,PodSandboxId:1fe2d888c36b1c1b824bcee5d4ce8a651259c83ae8aa1773f087ea6df2d96bf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761601795331111577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512e9579c013c3f78de9a1
8183460d79,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c33bd4f5fe3881186ba321c711231ddc3b9a2c90625be996a7398b25a0dddc7,PodSandboxId:55a74b68518db50a5cff49e1c9a42f705d12d170520446d0a25a9a06a5fc964c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761601795308910315,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedu
ler-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5c414c1431b3b7b03b86ef9da2c23f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47ef2929025d66c27a6165d3458d693292b4f9b0d8583bd9d27e8131714b9db,PodSandboxId:a7f282c54f8b222436f420aad47039d1ea23dd7501f113e9b641ad20284fc91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761601795290552395,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d343b52f2b769af26137db37052fa47f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b86682b7-6194-49ac-9a69-f2d1d92ecfd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.896585797Z" level=debug msg="Ping https://registry-1.docker.io/v2/ status 401" file="docker/docker_client.go:901"
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.897140855Z" level=debug msg="GET https://auth.docker.io/token?scope=repository%3Akicbase%2Fecho-server%3Apull&service=registry.docker.io" file="docker/docker_client.go:861"
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.934246969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd53be3f-54c3-4497-a9f4-bcaf0e223351 name=/runtime.v1.RuntimeService/Version
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.934323970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd53be3f-54c3-4497-a9f4-bcaf0e223351 name=/runtime.v1.RuntimeService/Version
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.936481124Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fdaf02b-fe05-41ae-b90b-749b74b920e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.937992038Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761602107937964869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598771,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fdaf02b-fe05-41ae-b90b-749b74b920e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.938578284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20cc3507-cd85-4609-83a2-f500d560a8fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.938734830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20cc3507-cd85-4609-83a2-f500d560a8fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 21:55:07 addons-865238 crio[811]: time="2025-10-27 21:55:07.939078817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df7d908c7934fb89c6efea84fffa60934cf2851590005e7be760d7e84c31c747,PodSandboxId:49eaef377dfe28b0629d2cd86488442aa4d382faf6a7d26be12f44a4666f0246,Metadata:&ContainerMetadata{Name:registry-creds,Attempt:0,},Image:&ImageSpec{Image:docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14,State:CONTAINER_RUNNING,CreatedAt:1761602062944357114,Labels:map[string]string{io.kubernetes.container.name: registry-creds,io.kubernetes.pod.name: registry-creds-764b6fb674-jpqnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0066aebe-efcf-47e0-b1e7-b86db905c6fa,},Annotations:map[string]string{io.kubernetes.container.hash: cbd9560c,io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a74a5314baf7445da6f9842473ea38eff4b945436f8869fbc60fe8feceb5079,PodSandboxId:24ed64471618d820f7bc1c61eab4c7bf5deb2b82f23797487f118f3b2b4d71cb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761601963639147810,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8e9adf6-9ebd-4271-b241-a112a7898205,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"cont
ainerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f9a33ba953b1bd67117fb73ee83165ea9478a96b5b1b5c42843a476129afa3,PodSandboxId:beec0a1558917665d12e61b086bca6f2d5f0e8953d54564ad7328195eaf7e3ec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761601920432272100,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88657516-1699-4de3-80c1-13dffabfc378,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88efe19bb8368ec417b76990932b83673d0ffa7a5a00d39e752f5cd42affb6b6,PodSandboxId:05a0f2dd2d3135f7e6776542d9d3218ff5bb835851620a24908ec51dbaef51ee,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761601890848452335,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-v4xd6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f8be4230-9e49-4563-ae7f-5446af4a2657,},Annota
tions:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4a491990b8610e3e56922ef81931d3899638564a6f14cde2b9cd2abbedbc8c90,PodSandboxId:3e79a834c3d17c76df6ed5bef2b88fd7d57c77d29ebb00ef4114328e2fcece47,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa9
3917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761601882617402649,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-pp5nk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70d7bfd4-fc74-4f1c-997c-f8a090a71b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f196b1418941bc7c05580d855dc48ff0a1625e1d754ece9e2a7e23938f5d16c,PodSandboxId:1eaaf81a0f4a3fc53d64dda4ca66a6a14445d411a8f18ec0ffd5bc6489a78c79,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761601873811903134,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qg6jb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea477d37-e2ca-468e-a444-07fcd1d2b6e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3f33f008af52f9926a72b6aef50f48c4dd2adcd16ca14c362c4c0a5b46dfb1,PodSandboxId:75a677d68b78ec81a04119266793498b924c39c451f4ff3017a835cce9f14f3e,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761601857230318263,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jhhrx,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 419b3ce9-a1a3-4f6d-881d-b766bf86b6f9,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b062b3a01eb1068e1c00adcf5d5dcaef17094f5735eb337c5095b7b8297d5a9,PodSandboxId:02b51b944864ed77a4ea3ce65172b191c14625055f07e8b0d9959eba2267af06,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0
cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761601844294795641,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f85e553-b909-45c3-a6a4-67606540769d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe72f6cade441dcfefd723121530ca56b6d795480b43ed7b20eac064465accd5,PodSandboxId:bfed73207b8dcc9e7603a361c2141a2738c786493ddf72f99d28f0a7132513e5,Me
tadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761601818030453830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7142b212-9d5b-41df-b0fc-2e0642bd74eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b347db803db76104f5b8e22b6f58b7d2c66f63dfd5e3566e6923baee32ebae,PodSandboxId:0b314c1d51fc2018b4b296f65dcd85d26a42bfe40d160a05f95f900ee008e151,Metadata:&Cont
ainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761601816411210436,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zwgdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e60fd84-e823-4898-9d5f-51c25e535361,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53fad5c69865baf869b3e31d3c2e70270d814d5ddeb04fab61b1f5cc55c4d62,PodSandboxId:992843860b59dec8cab959f94fa3e77386d2f09
c4c6a4ed7bb75a82f25b8154f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761601808256719158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-68w47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7175f5-e708-447a-af89-f17a2745c753,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:256166291e6d99bae322cd4b5a441e93b35645f49374c60393928f0ee63252d6,PodSandboxId:e778278745b7acbadf2416ae57829308f898a04f518a860f62ba7b6d97a65c05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761601807215110107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7z9xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15c180ba-494d-4c35-af5c-b5239408cd66,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCou
nt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5272762e20f7a319c335849c1f936bc5cddb087c3e7528705bfad3931973f,PodSandboxId:49b07f4622e3430c51ce0053178e044d946484a96bc80366264b1d4e3c500380,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761601795322113317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1782bc8a5930cfc5f91381a7b5ed0ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.
ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca04a08197cbf500d76ab372ede4837038c7cdc0d0ecf2d131cd355bbfdb16a,PodSandboxId:1fe2d888c36b1c1b824bcee5d4ce8a651259c83ae8aa1773f087ea6df2d96bf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761601795331111577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 512e9579c013c3f78de9a1
8183460d79,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c33bd4f5fe3881186ba321c711231ddc3b9a2c90625be996a7398b25a0dddc7,PodSandboxId:55a74b68518db50a5cff49e1c9a42f705d12d170520446d0a25a9a06a5fc964c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761601795308910315,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedu
ler-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5c414c1431b3b7b03b86ef9da2c23f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47ef2929025d66c27a6165d3458d693292b4f9b0d8583bd9d27e8131714b9db,PodSandboxId:a7f282c54f8b222436f420aad47039d1ea23dd7501f113e9b641ad20284fc91c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761601795290552395,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-865238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d343b52f2b769af26137db37052fa47f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20cc3507-cd85-4609-83a2-f500d560a8fa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	df7d908c7934f       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605             45 seconds ago      Running             registry-creds            0                   49eaef377dfe2       registry-creds-764b6fb674-jpqnm
	4a74a5314baf7       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   24ed64471618d       nginx
	22f9a33ba953b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   beec0a1558917       busybox
	88efe19bb8368       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   05a0f2dd2d313       ingress-nginx-controller-675c5ddd98-v4xd6
	4a491990b8610       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             3 minutes ago       Exited              patch                     2                   3e79a834c3d17       ingress-nginx-admission-patch-pp5nk
	0f196b1418941       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago       Exited              create                    0                   1eaaf81a0f4a3       ingress-nginx-admission-create-qg6jb
	5f3f33f008af5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   75a677d68b78e       gadget-jhhrx
	6b062b3a01eb1       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   02b51b944864e       kube-ingress-dns-minikube
	fe72f6cade441       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   bfed73207b8dc       storage-provisioner
	b8b347db803db       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   0b314c1d51fc2       amd-gpu-device-plugin-zwgdd
	b53fad5c69865       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   992843860b59d       coredns-66bc5c9577-68w47
	256166291e6d9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   e778278745b7a       kube-proxy-7z9xg
	aca04a08197cb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   1fe2d888c36b1       kube-apiserver-addons-865238
	8ad5272762e20       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   49b07f4622e34       kube-controller-manager-addons-865238
	7c33bd4f5fe38       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   55a74b68518db       kube-scheduler-addons-865238
	d47ef2929025d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   a7f282c54f8b2       etcd-addons-865238
	
	
	==> coredns [b53fad5c69865baf869b3e31d3c2e70270d814d5ddeb04fab61b1f5cc55c4d62] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.23:36021 - 61660 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000329775s
	[INFO] 10.244.0.23:34473 - 4419 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108904s
	[INFO] 10.244.0.23:55165 - 19059 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000216423s
	[INFO] 10.244.0.23:58706 - 4573 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000774283s
	[INFO] 10.244.0.23:44746 - 56155 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094994s
	[INFO] 10.244.0.23:42401 - 11203 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000359183s
	[INFO] 10.244.0.23:36331 - 37978 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00630629s
	[INFO] 10.244.0.23:41341 - 5219 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004301282s
	[INFO] 10.244.0.28:58888 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0013471s
	[INFO] 10.244.0.28:34787 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000359997s
	[INFO] 10.244.0.33:36596 - 12671 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000779295s
	[INFO] 10.244.0.33:52713 - 12777 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000454732s
	[INFO] 10.244.0.33:34789 - 15182 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000518851s
	[INFO] 10.244.0.33:54244 - 23439 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000871941s
	[INFO] 10.244.0.33:54890 - 2467 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000122457s
	[INFO] 10.244.0.33:51367 - 61544 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000080523s
	[INFO] 10.244.0.33:60496 - 29002 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.00327778s
	[INFO] 10.244.0.33:35393 - 42283 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.004132017s
	
	
	==> describe nodes <==
	Name:               addons-865238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-865238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=addons-865238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T21_50_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-865238
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 21:49:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-865238
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 21:55:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 21:54:36 +0000   Mon, 27 Oct 2025 21:49:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 21:54:36 +0000   Mon, 27 Oct 2025 21:49:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 21:54:36 +0000   Mon, 27 Oct 2025 21:49:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 21:54:36 +0000   Mon, 27 Oct 2025 21:50:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    addons-865238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c835d19650b4229ba303221fdc749a7
	  System UUID:                0c835d19-650b-4229-ba30-3221fdc749a7
	  Boot ID:                    585bf945-8819-490b-a5d7-599850d20833
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     hello-world-app-5d498dc89-g9v8p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-jhhrx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-v4xd6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m51s
	  kube-system                 amd-gpu-device-plugin-zwgdd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 coredns-66bc5c9577-68w47                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m2s
	  kube-system                 etcd-addons-865238                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m7s
	  kube-system                 kube-apiserver-addons-865238                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-addons-865238        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-7z9xg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-addons-865238                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 registry-creds-764b6fb674-jpqnm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m59s                  kube-proxy       
	  Normal  Starting                 5m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node addons-865238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node addons-865238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m14s)  kubelet          Node addons-865238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m7s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m7s                   kubelet          Node addons-865238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s                   kubelet          Node addons-865238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s                   kubelet          Node addons-865238 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m6s                   kubelet          Node addons-865238 status is now: NodeReady
	  Normal  RegisteredNode           5m3s                   node-controller  Node addons-865238 event: Registered Node addons-865238 in Controller
	
	
	==> dmesg <==
	[  +5.342540] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.669127] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.528106] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.110335] kauditd_printk_skb: 17 callbacks suppressed
	[Oct27 21:51] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.095434] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.264685] kauditd_printk_skb: 81 callbacks suppressed
	[  +3.667014] kauditd_printk_skb: 155 callbacks suppressed
	[  +3.827952] kauditd_printk_skb: 75 callbacks suppressed
	[  +4.650370] kauditd_printk_skb: 46 callbacks suppressed
	[ +12.979255] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000066] kauditd_printk_skb: 2 callbacks suppressed
	[Oct27 21:52] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000398] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000073] kauditd_printk_skb: 109 callbacks suppressed
	[  +1.755205] kauditd_printk_skb: 118 callbacks suppressed
	[  +1.824533] kauditd_printk_skb: 189 callbacks suppressed
	[  +4.961504] kauditd_printk_skb: 48 callbacks suppressed
	[  +0.374858] kauditd_printk_skb: 91 callbacks suppressed
	[  +5.613034] kauditd_printk_skb: 26 callbacks suppressed
	[Oct27 21:53] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.000086] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.831843] kauditd_printk_skb: 41 callbacks suppressed
	[Oct27 21:54] kauditd_printk_skb: 127 callbacks suppressed
	[Oct27 21:55] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [d47ef2929025d66c27a6165d3458d693292b4f9b0d8583bd9d27e8131714b9db] <==
	{"level":"warn","ts":"2025-10-27T21:51:28.015923Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.709104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2025-10-27T21:51:28.015948Z","caller":"traceutil/trace.go:172","msg":"trace[600617793] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1184; }","duration":"107.751146ms","start":"2025-10-27T21:51:27.908186Z","end":"2025-10-27T21:51:28.015937Z","steps":["trace[600617793] 'agreement among raft nodes before linearized reading'  (duration: 107.624985ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T21:51:28.016385Z","caller":"traceutil/trace.go:172","msg":"trace[521109555] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"178.98232ms","start":"2025-10-27T21:51:27.837391Z","end":"2025-10-27T21:51:28.016374Z","steps":["trace[521109555] 'process raft request'  (duration: 178.859769ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T21:51:28.018724Z","caller":"traceutil/trace.go:172","msg":"trace[1080081780] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"179.149162ms","start":"2025-10-27T21:51:27.839563Z","end":"2025-10-27T21:51:28.018712Z","steps":["trace[1080081780] 'process raft request'  (duration: 179.065483ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T21:51:32.337955Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"216.554788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T21:51:32.338000Z","caller":"traceutil/trace.go:172","msg":"trace[1001148591] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1202; }","duration":"216.610161ms","start":"2025-10-27T21:51:32.121380Z","end":"2025-10-27T21:51:32.337990Z","steps":["trace[1001148591] 'range keys from in-memory index tree'  (duration: 216.495365ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T21:51:37.771749Z","caller":"traceutil/trace.go:172","msg":"trace[481674199] linearizableReadLoop","detail":"{readStateIndex:1258; appliedIndex:1258; }","duration":"172.420197ms","start":"2025-10-27T21:51:37.599306Z","end":"2025-10-27T21:51:37.771726Z","steps":["trace[481674199] 'read index received'  (duration: 172.413501ms)","trace[481674199] 'applied index is now lower than readState.Index'  (duration: 5.604µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T21:51:37.771895Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.593525ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T21:51:37.771916Z","caller":"traceutil/trace.go:172","msg":"trace[42264984] range","detail":"{range_begin:/registry/horizontalpodautoscalers; range_end:; response_count:0; response_revision:1226; }","duration":"172.632145ms","start":"2025-10-27T21:51:37.599277Z","end":"2025-10-27T21:51:37.771909Z","steps":["trace[42264984] 'agreement among raft nodes before linearized reading'  (duration: 172.559527ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T21:51:37.772728Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.373665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T21:51:37.773085Z","caller":"traceutil/trace.go:172","msg":"trace[99290169] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1227; }","duration":"130.731039ms","start":"2025-10-27T21:51:37.642336Z","end":"2025-10-27T21:51:37.773067Z","steps":["trace[99290169] 'agreement among raft nodes before linearized reading'  (duration: 130.346517ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T21:51:37.772865Z","caller":"traceutil/trace.go:172","msg":"trace[519784479] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"187.914482ms","start":"2025-10-27T21:51:37.584835Z","end":"2025-10-27T21:51:37.772750Z","steps":["trace[519784479] 'process raft request'  (duration: 187.51775ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T21:52:26.545121Z","caller":"traceutil/trace.go:172","msg":"trace[211704344] transaction","detail":"{read_only:false; response_revision:1444; number_of_response:1; }","duration":"120.305183ms","start":"2025-10-27T21:52:26.424782Z","end":"2025-10-27T21:52:26.545088Z","steps":["trace[211704344] 'process raft request'  (duration: 120.147309ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T21:52:33.667392Z","caller":"traceutil/trace.go:172","msg":"trace[1079433747] transaction","detail":"{read_only:false; response_revision:1530; number_of_response:1; }","duration":"301.002811ms","start":"2025-10-27T21:52:33.366378Z","end":"2025-10-27T21:52:33.667381Z","steps":["trace[1079433747] 'process raft request'  (duration: 300.898496ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T21:52:33.667533Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"282.989824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2025-10-27T21:52:33.667584Z","caller":"traceutil/trace.go:172","msg":"trace[494321599] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1530; }","duration":"283.065423ms","start":"2025-10-27T21:52:33.384507Z","end":"2025-10-27T21:52:33.667573Z","steps":["trace[494321599] 'agreement among raft nodes before linearized reading'  (duration: 282.913981ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T21:52:33.667656Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T21:52:33.366361Z","time spent":"301.087332ms","remote":"127.0.0.1:57216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3755,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/registry-6b586f9694-9j6vm\" mod_revision:1528 > success:<request_put:<key:\"/registry/pods/kube-system/registry-6b586f9694-9j6vm\" value_size:3695 >> failure:<request_range:<key:\"/registry/pods/kube-system/registry-6b586f9694-9j6vm\" > >"}
	{"level":"info","ts":"2025-10-27T21:52:33.667362Z","caller":"traceutil/trace.go:172","msg":"trace[1361994060] linearizableReadLoop","detail":"{readStateIndex:1576; appliedIndex:1576; }","duration":"282.787204ms","start":"2025-10-27T21:52:33.384512Z","end":"2025-10-27T21:52:33.667299Z","steps":["trace[1361994060] 'read index received'  (duration: 282.780652ms)","trace[1361994060] 'applied index is now lower than readState.Index'  (duration: 5.668µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T21:52:33.668012Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"277.262466ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"warn","ts":"2025-10-27T21:52:33.668016Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.631938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T21:52:33.668039Z","caller":"traceutil/trace.go:172","msg":"trace[1575012332] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1530; }","duration":"277.289285ms","start":"2025-10-27T21:52:33.390738Z","end":"2025-10-27T21:52:33.668027Z","steps":["trace[1575012332] 'agreement among raft nodes before linearized reading'  (duration: 277.200687ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T21:52:33.668039Z","caller":"traceutil/trace.go:172","msg":"trace[1742707955] range","detail":"{range_begin:/registry/rolebindings; range_end:; response_count:0; response_revision:1530; }","duration":"171.656818ms","start":"2025-10-27T21:52:33.496377Z","end":"2025-10-27T21:52:33.668033Z","steps":["trace[1742707955] 'agreement among raft nodes before linearized reading'  (duration: 171.613998ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T21:52:33.668128Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.441154ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T21:52:33.668145Z","caller":"traceutil/trace.go:172","msg":"trace[140346658] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1530; }","duration":"213.458775ms","start":"2025-10-27T21:52:33.454682Z","end":"2025-10-27T21:52:33.668141Z","steps":["trace[140346658] 'agreement among raft nodes before linearized reading'  (duration: 213.4312ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T21:52:56.981582Z","caller":"traceutil/trace.go:172","msg":"trace[1813028233] transaction","detail":"{read_only:false; response_revision:1688; number_of_response:1; }","duration":"189.113711ms","start":"2025-10-27T21:52:56.792444Z","end":"2025-10-27T21:52:56.981558Z","steps":["trace[1813028233] 'process raft request'  (duration: 189.014505ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:55:08 up 5 min,  0 users,  load average: 0.31, 1.04, 0.59
	Linux addons-865238 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Oct 25 21:00:46 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [aca04a08197cbf500d76ab372ede4837038c7cdc0d0ecf2d131cd355bbfdb16a] <==
	 > logger="UnhandledError"
	I1027 21:50:59.624364       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 21:50:59.643950       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1027 21:52:07.803088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.175:8443->192.168.39.1:58426: use of closed network connection
	E1027 21:52:08.040229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.175:8443->192.168.39.1:58448: use of closed network connection
	I1027 21:52:17.727980       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.134.140"}
	I1027 21:52:39.964688       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1027 21:52:40.146092       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.173.82"}
	E1027 21:52:48.488039       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1027 21:53:00.581195       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1027 21:53:04.705990       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1027 21:53:21.614112       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1027 21:53:21.614255       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1027 21:53:21.637668       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1027 21:53:21.637720       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1027 21:53:21.649165       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1027 21:53:21.649298       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1027 21:53:21.693293       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1027 21:53:21.693349       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1027 21:53:21.747177       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1027 21:53:21.747244       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1027 21:53:22.640575       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1027 21:53:22.748406       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1027 21:53:22.779275       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1027 21:55:06.614154       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.83.180"}
	
	
	==> kube-controller-manager [8ad5272762e20f7a319c335849c1f936bc5cddb087c3e7528705bfad3931973f] <==
	E1027 21:53:30.221635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1027 21:53:30.536809       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1027 21:53:30.538011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1027 21:53:35.489749       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 21:53:35.489784       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 21:53:35.573664       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 21:53:35.573709       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 21:53:38.343585       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1027 21:53:38.344741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1027 21:53:39.272671       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1027 21:53:39.273742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1027 21:53:40.837186       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1027 21:53:40.838370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1027 21:53:56.182220       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1027 21:53:56.183413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1027 21:53:57.674700       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1027 21:53:57.676015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1027 21:54:00.400168       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1027 21:54:00.401506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1027 21:54:35.037702       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1027 21:54:35.038945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1027 21:54:39.360232       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1027 21:54:39.361724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1027 21:54:44.504756       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1027 21:54:44.505893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [256166291e6d99bae322cd4b5a441e93b35645f49374c60393928f0ee63252d6] <==
	I1027 21:50:07.880138       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 21:50:08.011465       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 21:50:08.012408       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.175"]
	E1027 21:50:08.025743       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 21:50:08.215803       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 21:50:08.215935       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 21:50:08.215989       1 server_linux.go:132] "Using iptables Proxier"
	I1027 21:50:08.243728       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 21:50:08.247545       1 server.go:527] "Version info" version="v1.34.1"
	I1027 21:50:08.249856       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 21:50:08.290999       1 config.go:200] "Starting service config controller"
	I1027 21:50:08.291041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 21:50:08.291057       1 config.go:106] "Starting endpoint slice config controller"
	I1027 21:50:08.291061       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 21:50:08.291085       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 21:50:08.291090       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 21:50:08.293543       1 config.go:309] "Starting node config controller"
	I1027 21:50:08.293575       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 21:50:08.293582       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 21:50:08.391360       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 21:50:08.391432       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 21:50:08.391463       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7c33bd4f5fe3881186ba321c711231ddc3b9a2c90625be996a7398b25a0dddc7] <==
	E1027 21:49:58.464917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 21:49:58.464961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 21:49:58.465088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 21:49:58.465156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 21:49:58.465192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 21:49:58.465246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 21:49:58.465293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 21:49:58.465321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 21:49:58.465356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 21:49:58.465429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 21:49:59.291227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 21:49:59.332877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 21:49:59.365744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 21:49:59.380126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 21:49:59.406738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 21:49:59.464973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 21:49:59.532752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 21:49:59.547224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 21:49:59.556709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 21:49:59.593367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 21:49:59.685170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 21:49:59.694219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 21:49:59.720797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 21:49:59.842737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1027 21:50:01.151740       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 21:53:41 addons-865238 kubelet[1517]: E1027 21:53:41.983101    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761602021982777834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 27 21:53:41 addons-865238 kubelet[1517]: E1027 21:53:41.983146    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761602021982777834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 27 21:53:51 addons-865238 kubelet[1517]: E1027 21:53:51.987965    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761602031987284888  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 27 21:53:51 addons-865238 kubelet[1517]: E1027 21:53:51.987992    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761602031987284888  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 27 21:54:01 addons-865238 kubelet[1517]: E1027 21:54:01.991557    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761602041990899289  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 27 21:54:01 addons-865238 kubelet[1517]: E1027 21:54:01.991899    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761602041990899289  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 27 21:54:11 addons-865238 kubelet[1517]: E1027 21:54:11.996016    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761602051995445214  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 27 21:54:11 addons-865238 kubelet[1517]: E1027 21:54:11.996071    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761602051995445214  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 27 21:54:17 addons-865238 kubelet[1517]: I1027 21:54:17.575920    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zwgdd" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:54:20 addons-865238 kubelet[1517]: I1027 21:54:20.776243    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jpqnm" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:54:22 addons-865238 kubelet[1517]: E1027 21:54:22.002162    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761602061999985435  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 27 21:54:22 addons-865238 kubelet[1517]: E1027 21:54:22.002556    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761602061999985435  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 27 21:54:23 addons-865238 kubelet[1517]: I1027 21:54:23.251858    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jpqnm" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:54:24 addons-865238 kubelet[1517]: I1027 21:54:24.257827    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jpqnm" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:54:32 addons-865238 kubelet[1517]: E1027 21:54:32.005784    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761602072005325882  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598771}  inodes_used:{value:205}}"
	Oct 27 21:54:32 addons-865238 kubelet[1517]: E1027 21:54:32.005854    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761602072005325882  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598771}  inodes_used:{value:205}}"
	Oct 27 21:54:35 addons-865238 kubelet[1517]: I1027 21:54:35.584455    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:54:42 addons-865238 kubelet[1517]: E1027 21:54:42.009828    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761602082009225833  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598771}  inodes_used:{value:205}}"
	Oct 27 21:54:42 addons-865238 kubelet[1517]: E1027 21:54:42.009877    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761602082009225833  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598771}  inodes_used:{value:205}}"
	Oct 27 21:54:52 addons-865238 kubelet[1517]: E1027 21:54:52.012030    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761602092011432462  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598771}  inodes_used:{value:205}}"
	Oct 27 21:54:52 addons-865238 kubelet[1517]: E1027 21:54:52.012055    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761602092011432462  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598771}  inodes_used:{value:205}}"
	Oct 27 21:55:02 addons-865238 kubelet[1517]: E1027 21:55:02.016315    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761602102015947169  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598771}  inodes_used:{value:205}}"
	Oct 27 21:55:02 addons-865238 kubelet[1517]: E1027 21:55:02.016407    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761602102015947169  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598771}  inodes_used:{value:205}}"
	Oct 27 21:55:06 addons-865238 kubelet[1517]: I1027 21:55:06.505384    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-jpqnm" podStartSLOduration=294.632165077 podStartE2EDuration="4m56.505367739s" podCreationTimestamp="2025-10-27 21:50:10 +0000 UTC" firstStartedPulling="2025-10-27 21:54:21.023064267 +0000 UTC m=+259.611348856" lastFinishedPulling="2025-10-27 21:54:22.89626694 +0000 UTC m=+261.484551518" observedRunningTime="2025-10-27 21:54:23.275174234 +0000 UTC m=+261.863458830" watchObservedRunningTime="2025-10-27 21:55:06.505367739 +0000 UTC m=+305.093652336"
	Oct 27 21:55:06 addons-865238 kubelet[1517]: I1027 21:55:06.698239    1517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnxfc\" (UniqueName: \"kubernetes.io/projected/f8233bed-3c4e-4272-9d86-3c2e9b554098-kube-api-access-qnxfc\") pod \"hello-world-app-5d498dc89-g9v8p\" (UID: \"f8233bed-3c4e-4272-9d86-3c2e9b554098\") " pod="default/hello-world-app-5d498dc89-g9v8p"
	
	
	==> storage-provisioner [fe72f6cade441dcfefd723121530ca56b6d795480b43ed7b20eac064465accd5] <==
	W1027 21:54:43.788715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:45.793207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:45.800192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:47.803846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:47.812222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:49.816368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:49.822746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:51.826960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:51.833715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:53.838704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:53.844303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:55.848807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:55.855444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:57.859967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:57.866314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:59.870356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:54:59.876440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:01.881109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:01.890388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:03.894179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:03.900146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:05.903752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:05.910014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:07.914101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:07.924699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-865238 -n addons-865238
helpers_test.go:269: (dbg) Run:  kubectl --context addons-865238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-g9v8p ingress-nginx-admission-create-qg6jb ingress-nginx-admission-patch-pp5nk
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-865238 describe pod hello-world-app-5d498dc89-g9v8p ingress-nginx-admission-create-qg6jb ingress-nginx-admission-patch-pp5nk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-865238 describe pod hello-world-app-5d498dc89-g9v8p ingress-nginx-admission-create-qg6jb ingress-nginx-admission-patch-pp5nk: exit status 1 (81.505661ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-g9v8p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-865238/192.168.39.175
	Start Time:       Mon, 27 Oct 2025 21:55:06 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qnxfc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qnxfc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-g9v8p to addons-865238
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.428s (1.428s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qg6jb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pp5nk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-865238 describe pod hello-world-app-5d498dc89-g9v8p ingress-nginx-admission-create-qg6jb ingress-nginx-admission-patch-pp5nk: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-865238 addons disable ingress-dns --alsologtostderr -v=1: (1.307313337s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-865238 addons disable ingress --alsologtostderr -v=1: (7.911816059s)
--- FAIL: TestAddons/parallel/Ingress (158.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (3.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image rm kicbase/echo-server:functional-880510 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 image rm kicbase/echo-server:functional-880510 --alsologtostderr: (3.423461366s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image ls
functional_test.go:418: expected "kicbase/echo-server:functional-880510" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (3.71s)
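The failing check can be replayed by hand against the same profile; a minimal sketch, assuming the functional-880510 cluster from this run is still available and reusing only the commands shown above:

	out/minikube-linux-amd64 -p functional-880510 image rm kicbase/echo-server:functional-880510 --alsologtostderr
	out/minikube-linux-amd64 -p functional-880510 image ls
	# the test fails when the second command still lists kicbase/echo-server:functional-880510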

                                                
                                    
TestPreload (157.75s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-764084 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1027 22:41:51.353948  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:58.680385  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-764084 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m37.545671468s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-764084 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-764084 image pull gcr.io/k8s-minikube/busybox: (1.425115064s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-764084
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-764084: (7.291797158s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-764084 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-764084 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (48.43821934s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-764084 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-27 22:44:06.476908187 +0000 UTC m=+3298.084590138
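For reference, the sequence that produced this failure can be replayed with the same commands the test issued; a sketch assuming a clean profile, with flags copied verbatim from the Run lines above:

	out/minikube-linux-amd64 start -p test-preload-764084 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
	out/minikube-linux-amd64 -p test-preload-764084 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-764084
	out/minikube-linux-amd64 start -p test-preload-764084 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-764084 image list
	# the test expects gcr.io/k8s-minikube/busybox to survive the stop/start cycle and appear in this list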
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-764084 -n test-preload-764084
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-764084 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-764084 logs -n 25: (1.213812779s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-451958 ssh -n multinode-451958-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:30 UTC │ 27 Oct 25 22:30 UTC │
	│ ssh     │ multinode-451958 ssh -n multinode-451958 sudo cat /home/docker/cp-test_multinode-451958-m03_multinode-451958.txt                                          │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:30 UTC │ 27 Oct 25 22:30 UTC │
	│ cp      │ multinode-451958 cp multinode-451958-m03:/home/docker/cp-test.txt multinode-451958-m02:/home/docker/cp-test_multinode-451958-m03_multinode-451958-m02.txt │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:30 UTC │ 27 Oct 25 22:30 UTC │
	│ ssh     │ multinode-451958 ssh -n multinode-451958-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:30 UTC │ 27 Oct 25 22:30 UTC │
	│ ssh     │ multinode-451958 ssh -n multinode-451958-m02 sudo cat /home/docker/cp-test_multinode-451958-m03_multinode-451958-m02.txt                                  │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:30 UTC │ 27 Oct 25 22:30 UTC │
	│ node    │ multinode-451958 node stop m03                                                                                                                            │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:30 UTC │ 27 Oct 25 22:30 UTC │
	│ node    │ multinode-451958 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:30 UTC │ 27 Oct 25 22:31 UTC │
	│ node    │ list -p multinode-451958                                                                                                                                  │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:31 UTC │                     │
	│ stop    │ -p multinode-451958                                                                                                                                       │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:31 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p multinode-451958 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │ 27 Oct 25 22:36 UTC │
	│ node    │ list -p multinode-451958                                                                                                                                  │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ node    │ multinode-451958 node delete m03                                                                                                                          │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ stop    │ multinode-451958 stop                                                                                                                                     │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p multinode-451958 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ node    │ list -p multinode-451958                                                                                                                                  │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ start   │ -p multinode-451958-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-451958-m02 │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ start   │ -p multinode-451958-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-451958-m03 │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:41 UTC │
	│ node    │ add -p multinode-451958                                                                                                                                   │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:41 UTC │                     │
	│ delete  │ -p multinode-451958-m03                                                                                                                                   │ multinode-451958-m03 │ jenkins │ v1.37.0 │ 27 Oct 25 22:41 UTC │ 27 Oct 25 22:41 UTC │
	│ delete  │ -p multinode-451958                                                                                                                                       │ multinode-451958     │ jenkins │ v1.37.0 │ 27 Oct 25 22:41 UTC │ 27 Oct 25 22:41 UTC │
	│ start   │ -p test-preload-764084 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-764084  │ jenkins │ v1.37.0 │ 27 Oct 25 22:41 UTC │ 27 Oct 25 22:43 UTC │
	│ image   │ test-preload-764084 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-764084  │ jenkins │ v1.37.0 │ 27 Oct 25 22:43 UTC │ 27 Oct 25 22:43 UTC │
	│ stop    │ -p test-preload-764084                                                                                                                                    │ test-preload-764084  │ jenkins │ v1.37.0 │ 27 Oct 25 22:43 UTC │ 27 Oct 25 22:43 UTC │
	│ start   │ -p test-preload-764084 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-764084  │ jenkins │ v1.37.0 │ 27 Oct 25 22:43 UTC │ 27 Oct 25 22:44 UTC │
	│ image   │ test-preload-764084 image list                                                                                                                            │ test-preload-764084  │ jenkins │ v1.37.0 │ 27 Oct 25 22:44 UTC │ 27 Oct 25 22:44 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:43:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:43:17.894309  380066 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:43:17.894611  380066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:43:17.894624  380066 out.go:374] Setting ErrFile to fd 2...
	I1027 22:43:17.894628  380066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:43:17.894875  380066 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:43:17.895418  380066 out.go:368] Setting JSON to false
	I1027 22:43:17.896427  380066 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8745,"bootTime":1761596253,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:43:17.896535  380066 start.go:143] virtualization: kvm guest
	I1027 22:43:17.898872  380066 out.go:179] * [test-preload-764084] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:43:17.900413  380066 notify.go:221] Checking for updates...
	I1027 22:43:17.900456  380066 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:43:17.901930  380066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:43:17.903268  380066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 22:43:17.904583  380066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 22:43:17.905962  380066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:43:17.907367  380066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:43:17.909002  380066 config.go:182] Loaded profile config "test-preload-764084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1027 22:43:17.910985  380066 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1027 22:43:17.912228  380066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:43:17.948809  380066 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 22:43:17.950139  380066 start.go:307] selected driver: kvm2
	I1027 22:43:17.950160  380066 start.go:928] validating driver "kvm2" against &{Name:test-preload-764084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:43:17.950295  380066 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:43:17.951302  380066 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:43:17.951335  380066 cni.go:84] Creating CNI manager for ""
	I1027 22:43:17.951398  380066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 22:43:17.951465  380066 start.go:351] cluster config:
	{Name:test-preload-764084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:43:17.951602  380066 iso.go:125] acquiring lock: {Name:mk5fa90c3652bd8ab3eadfb83a864dcc2e121c25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:43:17.953254  380066 out.go:179] * Starting "test-preload-764084" primary control-plane node in "test-preload-764084" cluster
	I1027 22:43:17.954561  380066 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1027 22:43:17.980726  380066 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1027 22:43:17.980761  380066 cache.go:59] Caching tarball of preloaded images
	I1027 22:43:17.980941  380066 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1027 22:43:17.983516  380066 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1027 22:43:17.984748  380066 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1027 22:43:18.018110  380066 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1027 22:43:18.018159  380066 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1027 22:43:20.616545  380066 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1027 22:43:20.616726  380066 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/config.json ...
	I1027 22:43:20.616995  380066 start.go:360] acquireMachinesLock for test-preload-764084: {Name:mka983f7fa498b8241736aecc4fdc7843c00414d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 22:43:20.617071  380066 start.go:364] duration metric: took 50.7µs to acquireMachinesLock for "test-preload-764084"
	I1027 22:43:20.617092  380066 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:43:20.617099  380066 fix.go:55] fixHost starting: 
	I1027 22:43:20.619278  380066 fix.go:113] recreateIfNeeded on test-preload-764084: state=Stopped err=<nil>
	W1027 22:43:20.619306  380066 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 22:43:20.621271  380066 out.go:252] * Restarting existing kvm2 VM for "test-preload-764084" ...
	I1027 22:43:20.621356  380066 main.go:143] libmachine: starting domain...
	I1027 22:43:20.621397  380066 main.go:143] libmachine: ensuring networks are active...
	I1027 22:43:20.622195  380066 main.go:143] libmachine: Ensuring network default is active
	I1027 22:43:20.622543  380066 main.go:143] libmachine: Ensuring network mk-test-preload-764084 is active
	I1027 22:43:20.622994  380066 main.go:143] libmachine: getting domain XML...
	I1027 22:43:20.624099  380066 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-764084</name>
	  <uuid>d7b9ec5d-9946-41fa-b993-4a0d3ad6b1e1</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21790-352679/.minikube/machines/test-preload-764084/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21790-352679/.minikube/machines/test-preload-764084/test-preload-764084.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:bc:a7:dd'/>
	      <source network='mk-test-preload-764084'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e3:8f:1b'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1027 22:43:22.023535  380066 main.go:143] libmachine: waiting for domain to start...
	I1027 22:43:22.024852  380066 main.go:143] libmachine: domain is now running
	I1027 22:43:22.024872  380066 main.go:143] libmachine: waiting for IP...
	I1027 22:43:22.025673  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:22.026337  380066 main.go:143] libmachine: domain test-preload-764084 has current primary IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:22.026353  380066 main.go:143] libmachine: found domain IP: 192.168.39.194
	I1027 22:43:22.026361  380066 main.go:143] libmachine: reserving static IP address...
	I1027 22:43:22.026837  380066 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-764084", mac: "52:54:00:bc:a7:dd", ip: "192.168.39.194"} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:41:48 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:22.026882  380066 main.go:143] libmachine: skip adding static IP to network mk-test-preload-764084 - found existing host DHCP lease matching {name: "test-preload-764084", mac: "52:54:00:bc:a7:dd", ip: "192.168.39.194"}
	I1027 22:43:22.026936  380066 main.go:143] libmachine: reserved static IP address 192.168.39.194 for domain test-preload-764084
	I1027 22:43:22.026958  380066 main.go:143] libmachine: waiting for SSH...
	I1027 22:43:22.026969  380066 main.go:143] libmachine: Getting to WaitForSSH function...
	I1027 22:43:22.029160  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:22.029629  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:41:48 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:22.029658  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:22.029992  380066 main.go:143] libmachine: Using SSH client type: native
	I1027 22:43:22.030304  380066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1027 22:43:22.030324  380066 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1027 22:43:25.112178  380066 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.194:22: connect: no route to host
	I1027 22:43:31.193201  380066 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.194:22: connect: no route to host
	I1027 22:43:34.195100  380066 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.194:22: connect: connection refused
	I1027 22:43:37.302737  380066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:43:37.306966  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.307654  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:37.307697  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.308023  380066 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/config.json ...
	I1027 22:43:37.308278  380066 machine.go:94] provisionDockerMachine start ...
	I1027 22:43:37.310797  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.311214  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:37.311251  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.311446  380066 main.go:143] libmachine: Using SSH client type: native
	I1027 22:43:37.311686  380066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1027 22:43:37.311702  380066 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:43:37.415212  380066 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1027 22:43:37.415256  380066 buildroot.go:166] provisioning hostname "test-preload-764084"
	I1027 22:43:37.418493  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.419001  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:37.419030  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.419244  380066 main.go:143] libmachine: Using SSH client type: native
	I1027 22:43:37.419522  380066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1027 22:43:37.419539  380066 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-764084 && echo "test-preload-764084" | sudo tee /etc/hostname
	I1027 22:43:37.542457  380066 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-764084
	
	I1027 22:43:37.545803  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.546334  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:37.546368  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.546657  380066 main.go:143] libmachine: Using SSH client type: native
	I1027 22:43:37.546919  380066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1027 22:43:37.546942  380066 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-764084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-764084/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-764084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:43:37.662008  380066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:43:37.662049  380066 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21790-352679/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-352679/.minikube}
	I1027 22:43:37.662089  380066 buildroot.go:174] setting up certificates
	I1027 22:43:37.662105  380066 provision.go:84] configureAuth start
	I1027 22:43:37.665365  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.665754  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:37.665825  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.668186  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.668588  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:37.668609  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.668775  380066 provision.go:143] copyHostCerts
	I1027 22:43:37.668829  380066 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem, removing ...
	I1027 22:43:37.668851  380066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem
	I1027 22:43:37.668949  380066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem (1082 bytes)
	I1027 22:43:37.669057  380066 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem, removing ...
	I1027 22:43:37.669065  380066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem
	I1027 22:43:37.669091  380066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem (1123 bytes)
	I1027 22:43:37.669159  380066 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem, removing ...
	I1027 22:43:37.669168  380066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem
	I1027 22:43:37.669196  380066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem (1675 bytes)
	I1027 22:43:37.669690  380066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem org=jenkins.test-preload-764084 san=[127.0.0.1 192.168.39.194 localhost minikube test-preload-764084]
	I1027 22:43:37.755687  380066 provision.go:177] copyRemoteCerts
	I1027 22:43:37.755755  380066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:43:37.758585  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.759073  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:37.759100  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.759241  380066 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/test-preload-764084/id_rsa Username:docker}
	I1027 22:43:37.843142  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 22:43:37.877465  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1027 22:43:37.911517  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:43:37.945192  380066 provision.go:87] duration metric: took 283.070076ms to configureAuth
	I1027 22:43:37.945231  380066 buildroot.go:189] setting minikube options for container-runtime
	I1027 22:43:37.945479  380066 config.go:182] Loaded profile config "test-preload-764084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1027 22:43:37.948807  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.949287  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:37.949312  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:37.949577  380066 main.go:143] libmachine: Using SSH client type: native
	I1027 22:43:37.949835  380066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1027 22:43:37.949853  380066 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:43:38.214775  380066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:43:38.214801  380066 machine.go:97] duration metric: took 906.507741ms to provisionDockerMachine
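The SSH command above writes a one-line sysconfig file and restarts CRI-O; a quick manual check of the result, run inside the guest, would look like this (illustrative, with the expected content copied from the command output above).

    # Verify the sysconfig drop-in written by the step above (inside the guest).
    cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl restart crio && systemctl is-active crio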
	I1027 22:43:38.214812  380066 start.go:293] postStartSetup for "test-preload-764084" (driver="kvm2")
	I1027 22:43:38.214821  380066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:43:38.214906  380066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:43:38.218187  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:38.218694  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:38.218725  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:38.218918  380066 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/test-preload-764084/id_rsa Username:docker}
	I1027 22:43:38.302009  380066 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:43:38.307733  380066 info.go:137] Remote host: Buildroot 2025.02
	I1027 22:43:38.307764  380066 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/addons for local assets ...
	I1027 22:43:38.307859  380066 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/files for local assets ...
	I1027 22:43:38.307997  380066 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem -> 3566212.pem in /etc/ssl/certs
	I1027 22:43:38.308104  380066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:43:38.323918  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /etc/ssl/certs/3566212.pem (1708 bytes)
	I1027 22:43:38.357226  380066 start.go:296] duration metric: took 142.398675ms for postStartSetup
	I1027 22:43:38.357281  380066 fix.go:57] duration metric: took 17.740180934s for fixHost
	I1027 22:43:38.360207  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:38.360735  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:38.360769  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:38.360960  380066 main.go:143] libmachine: Using SSH client type: native
	I1027 22:43:38.361198  380066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1027 22:43:38.361212  380066 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1027 22:43:38.464140  380066 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761605018.415933050
	
	I1027 22:43:38.464161  380066 fix.go:217] guest clock: 1761605018.415933050
	I1027 22:43:38.464170  380066 fix.go:230] Guest: 2025-10-27 22:43:38.41593305 +0000 UTC Remote: 2025-10-27 22:43:38.357287377 +0000 UTC m=+20.514456727 (delta=58.645673ms)
	I1027 22:43:38.464187  380066 fix.go:201] guest clock delta is within tolerance: 58.645673ms
	I1027 22:43:38.464192  380066 start.go:83] releasing machines lock for "test-preload-764084", held for 17.847110857s
	I1027 22:43:38.467140  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:38.467526  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:38.467560  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:38.468233  380066 ssh_runner.go:195] Run: cat /version.json
	I1027 22:43:38.468344  380066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:43:38.471367  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:38.471451  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:38.471801  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:38.471825  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:38.471923  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:38.471960  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:38.471982  380066 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/test-preload-764084/id_rsa Username:docker}
	I1027 22:43:38.472163  380066 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/test-preload-764084/id_rsa Username:docker}
	I1027 22:43:38.556254  380066 ssh_runner.go:195] Run: systemctl --version
	I1027 22:43:38.605306  380066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:43:38.757430  380066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:43:38.767117  380066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:43:38.767221  380066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:43:38.795454  380066 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 22:43:38.795487  380066 start.go:496] detecting cgroup driver to use...
	I1027 22:43:38.795567  380066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:43:38.818413  380066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:43:38.839414  380066 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:43:38.839480  380066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:43:38.859028  380066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:43:38.877357  380066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:43:39.039610  380066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:43:39.270914  380066 docker.go:234] disabling docker service ...
	I1027 22:43:39.271007  380066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:43:39.290368  380066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:43:39.307532  380066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:43:39.472140  380066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:43:39.626613  380066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:43:39.659509  380066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:43:39.686919  380066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1027 22:43:39.687002  380066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:43:39.701110  380066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 22:43:39.701178  380066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:43:39.715587  380066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:43:39.730005  380066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:43:39.744224  380066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:43:39.759531  380066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:43:39.774105  380066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:43:39.798101  380066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
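The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; a simple way to confirm the end state (illustrative, run in the guest) is to grep for the keys that were touched.

    # Spot-check the values the sed edits are expected to leave behind.
    grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
    # default_sysctls should now carry "net.ipv4.ip_unprivileged_port_start=0"
    grep -A1 '^default_sysctls' /etc/crio/crio.conf.d/02-crio.conf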
	I1027 22:43:39.812153  380066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:43:39.824322  380066 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 22:43:39.824389  380066 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 22:43:39.847908  380066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
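The modprobe and ip_forward steps above are the standard bridge-netfilter preparation; run by hand inside the guest they would be:

    # Illustrative equivalent of the netfilter preparation above.
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # should resolve now instead of "No such file or directory"
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward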
	I1027 22:43:39.862016  380066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:43:40.011251  380066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:43:40.143994  380066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:43:40.144086  380066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:43:40.150386  380066 start.go:564] Will wait 60s for crictl version
	I1027 22:43:40.150466  380066 ssh_runner.go:195] Run: which crictl
	I1027 22:43:40.155151  380066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 22:43:40.201323  380066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
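With /etc/crictl.yaml pointing at the CRI-O socket (written a few entries earlier), the same version and image queries can be issued manually; a sketch, not part of the captured run:

    # Query CRI-O through crictl using the endpoint configured in /etc/crictl.yaml.
    sudo crictl version
    sudo crictl images --output json | head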
	I1027 22:43:40.201436  380066 ssh_runner.go:195] Run: crio --version
	I1027 22:43:40.234135  380066 ssh_runner.go:195] Run: crio --version
	I1027 22:43:40.267662  380066 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1027 22:43:40.271657  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:40.272071  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:40.272095  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:40.272290  380066 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1027 22:43:40.277526  380066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:43:40.294307  380066 kubeadm.go:884] updating cluster {Name:test-preload-764084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1027 22:43:40.294419  380066 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1027 22:43:40.294481  380066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:43:40.340219  380066 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1027 22:43:40.340298  380066 ssh_runner.go:195] Run: which lz4
	I1027 22:43:40.345172  380066 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 22:43:40.351004  380066 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 22:43:40.351049  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1027 22:43:42.054490  380066 crio.go:462] duration metric: took 1.709355306s to copy over tarball
	I1027 22:43:42.054575  380066 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 22:43:43.871498  380066 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.816892913s)
	I1027 22:43:43.871534  380066 crio.go:469] duration metric: took 1.817009062s to extract the tarball
	I1027 22:43:43.871551  380066 ssh_runner.go:146] rm: /preloaded.tar.lz4
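The preload path above amounts to copying the cached tarball into the guest and extracting it over /var. A hand-run equivalent is sketched below; the tarball, key path, and ssh user are taken from the log, while staging through /tmp is a simplification of mine rather than what minikube's ssh_runner does.

    # Copy and unpack the preload tarball the way the log describes (illustrative scp/ssh wrapper).
    KEY=/home/jenkins/minikube-integration/21790-352679/.minikube/machines/test-preload-764084/id_rsa
    scp -i "$KEY" /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 \
      docker@192.168.39.194:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.39.194 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'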
	I1027 22:43:43.913152  380066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:43:43.958419  380066 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:43:43.958445  380066 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:43:43.958453  380066 kubeadm.go:935] updating node { 192.168.39.194 8443 v1.32.0 crio true true} ...
	I1027 22:43:43.958561  380066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-764084 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-764084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
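The kubelet unit override generated above is written a few entries below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once that scp lands, the effective unit can be inspected from inside the guest; an illustrative check, not part of the captured run:

    # Show kubelet.service plus the 10-kubeadm.conf drop-in and its path.
    systemctl cat kubelet
    systemctl show kubelet -p DropInPaths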
	I1027 22:43:43.958634  380066 ssh_runner.go:195] Run: crio config
	I1027 22:43:44.007272  380066 cni.go:84] Creating CNI manager for ""
	I1027 22:43:44.007296  380066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 22:43:44.007335  380066 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:43:44.007358  380066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-764084 NodeName:test-preload-764084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:43:44.007473  380066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-764084"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.194"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:43:44.007534  380066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1027 22:43:44.021017  380066 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:43:44.021104  380066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:43:44.034095  380066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1027 22:43:44.056991  380066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:43:44.080210  380066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
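Before kubeadm replays its init phases further below, the rendered file can be sanity-checked by hand. Recent kubeadm releases ship a "config validate" subcommand; minikube does not run it here, so treat this as an optional manual step rather than part of the captured flow.

    # Optional manual check of the rendered kubeadm config (not in the log).
    sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new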
	I1027 22:43:44.105557  380066 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I1027 22:43:44.110618  380066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:43:44.127494  380066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:43:44.280785  380066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:43:44.301580  380066 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084 for IP: 192.168.39.194
	I1027 22:43:44.301603  380066 certs.go:195] generating shared ca certs ...
	I1027 22:43:44.301620  380066 certs.go:227] acquiring lock for ca certs: {Name:mk64cd4e3986ee7cc99471fffd6acf1db89a5b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:43:44.301787  380066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key
	I1027 22:43:44.301859  380066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key
	I1027 22:43:44.301877  380066 certs.go:257] generating profile certs ...
	I1027 22:43:44.302035  380066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/client.key
	I1027 22:43:44.302114  380066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/apiserver.key.c1c6aa63
	I1027 22:43:44.302161  380066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/proxy-client.key
	I1027 22:43:44.302298  380066 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem (1338 bytes)
	W1027 22:43:44.302341  380066 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621_empty.pem, impossibly tiny 0 bytes
	I1027 22:43:44.302356  380066 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:43:44.302390  380066 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem (1082 bytes)
	I1027 22:43:44.302420  380066 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:43:44.302452  380066 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem (1675 bytes)
	I1027 22:43:44.302518  380066 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem (1708 bytes)
	I1027 22:43:44.303146  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:43:44.355986  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 22:43:44.396700  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:43:44.431799  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 22:43:44.465133  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 22:43:44.497118  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:43:44.529001  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:43:44.561214  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:43:44.595114  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:43:44.627823  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem --> /usr/share/ca-certificates/356621.pem (1338 bytes)
	I1027 22:43:44.659961  380066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /usr/share/ca-certificates/3566212.pem (1708 bytes)
	I1027 22:43:44.692766  380066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:43:44.716743  380066 ssh_runner.go:195] Run: openssl version
	I1027 22:43:44.724146  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:43:44.739160  380066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:43:44.745047  380066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:43:44.745129  380066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:43:44.753528  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:43:44.768239  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356621.pem && ln -fs /usr/share/ca-certificates/356621.pem /etc/ssl/certs/356621.pem"
	I1027 22:43:44.783099  380066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356621.pem
	I1027 22:43:44.788867  380066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 21:58 /usr/share/ca-certificates/356621.pem
	I1027 22:43:44.788969  380066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356621.pem
	I1027 22:43:44.797014  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356621.pem /etc/ssl/certs/51391683.0"
	I1027 22:43:44.811884  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3566212.pem && ln -fs /usr/share/ca-certificates/3566212.pem /etc/ssl/certs/3566212.pem"
	I1027 22:43:44.826989  380066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3566212.pem
	I1027 22:43:44.832932  380066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 21:58 /usr/share/ca-certificates/3566212.pem
	I1027 22:43:44.833000  380066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3566212.pem
	I1027 22:43:44.840928  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3566212.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:43:44.855182  380066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:43:44.861227  380066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:43:44.869489  380066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:43:44.877942  380066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:43:44.886658  380066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:43:44.895428  380066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:43:44.903968  380066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
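Each of the openssl probes above uses -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?". A compact manual version of the same check:

    # Exit status 0 means the certificate is still valid 86400s (24h) from now.
    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
      sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
        && echo "$c: ok" || echo "$c: expires within 24h"
    done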
	I1027 22:43:44.912531  380066 kubeadm.go:401] StartCluster: {Name:test-preload-764084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-764084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:43:44.912617  380066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:43:44.912704  380066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:43:44.954765  380066 cri.go:89] found id: ""
	I1027 22:43:44.954846  380066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:43:44.968205  380066 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:43:44.968227  380066 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:43:44.968277  380066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:43:44.981471  380066 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:43:44.981955  380066 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-764084" does not appear in /home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 22:43:44.982074  380066 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-352679/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-764084" cluster setting kubeconfig missing "test-preload-764084" context setting]
	I1027 22:43:44.982326  380066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/kubeconfig: {Name:mkf142c57fc1d516984237b4e01b6acd26119765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:43:44.982898  380066 kapi.go:59] client config for test-preload-764084: &rest.Config{Host:"https://192.168.39.194:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/client.key", CAFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:43:44.983313  380066 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 22:43:44.983330  380066 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 22:43:44.983335  380066 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 22:43:44.983339  380066 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 22:43:44.983344  380066 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 22:43:44.983724  380066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:43:44.996254  380066 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.194
	I1027 22:43:44.996298  380066 kubeadm.go:1161] stopping kube-system containers ...
	I1027 22:43:44.996315  380066 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1027 22:43:44.996371  380066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:43:45.053792  380066 cri.go:89] found id: ""
	I1027 22:43:45.053862  380066 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1027 22:43:45.084739  380066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:43:45.098229  380066 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:43:45.098249  380066 kubeadm.go:158] found existing configuration files:
	
	I1027 22:43:45.098308  380066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:43:45.110596  380066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:43:45.110666  380066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:43:45.123869  380066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:43:45.136166  380066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:43:45.136234  380066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:43:45.149366  380066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:43:45.160868  380066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:43:45.160957  380066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:43:45.173641  380066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:43:45.185559  380066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:43:45.185664  380066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:43:45.198802  380066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:43:45.212699  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:43:45.281797  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:43:46.309969  380066 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.028128208s)
	I1027 22:43:46.310058  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:43:46.588560  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:43:46.673835  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
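The restart path replays individual kubeadm init phases against the same config rather than performing a full init. Condensed into a loop, using the same binaries and config file shown in the log (illustrative only):

    # Replay the init phases the log runs one by one.
    KUBEADM=/var/lib/minikube/binaries/v1.32.0/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" $KUBEADM init phase $phase --config "$CFG"
    done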
	I1027 22:43:46.758801  380066 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:43:46.758924  380066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:43:47.259856  380066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:43:47.759249  380066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:43:48.259220  380066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:43:48.293460  380066 api_server.go:72] duration metric: took 1.534670633s to wait for apiserver process to appear ...
	I1027 22:43:48.293498  380066 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:43:48.293532  380066 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1027 22:43:50.980157  380066 api_server.go:279] https://192.168.39.194:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:43:50.980192  380066 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:43:50.980213  380066 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1027 22:43:51.066530  380066 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:43:51.066564  380066 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
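The 403 earlier in this sequence is expected, since anonymous users may not read /healthz on this apiserver, and the 500s around it only mean some post-start hooks have not finished yet. The poll can be reproduced with curl using the profile's client certificate (paths as reported in the client config above); this is a sketch, not part of the captured run.

    # Authenticated healthz probe against the endpoint from the log.
    curl -s https://192.168.39.194:8443/healthz \
      --cacert /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt \
      --cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/client.crt \
      --key /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/client.key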
	I1027 22:43:51.293822  380066 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1027 22:43:51.299226  380066 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:43:51.299257  380066 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:43:51.793941  380066 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1027 22:43:51.805759  380066 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:43:51.805794  380066 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:43:52.294535  380066 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1027 22:43:52.302447  380066 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:43:52.302487  380066 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:43:52.794226  380066 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1027 22:43:52.799568  380066 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I1027 22:43:52.806961  380066 api_server.go:141] control plane version: v1.32.0
	I1027 22:43:52.806996  380066 api_server.go:131] duration metric: took 4.513491229s to wait for apiserver health ...
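
Editor's note: the block above shows minikube polling the apiserver's /healthz endpoint, tolerating 500 responses (with per-hook [+]/[-] lines) until a 200 "ok" comes back, about 4.5s in total. Below is a minimal sketch of that kind of poll loop in Go; the endpoint is the one from the log, InsecureSkipVerify reflects the self-signed serving cert in this setup, and the 500ms interval is an assumption. This is an illustration, not minikube's api_server.go.

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "strings"
    "time"
)

// waitForHealthz polls url until it returns HTTP 200 with body "ok",
// or the timeout elapses. A 500 (unfinished poststarthooks) is retried.
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        // Sketch only: skip cert verification for the probe. A real client
        // should trust the cluster CA instead.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        Timeout:   5 * time.Second,
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                return nil
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver /healthz not healthy after %s", timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.39.194:8443/healthz", 2*time.Minute); err != nil {
        fmt.Println(err)
    }
}
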
	I1027 22:43:52.807007  380066 cni.go:84] Creating CNI manager for ""
	I1027 22:43:52.807014  380066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 22:43:52.809397  380066 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1027 22:43:52.810792  380066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1027 22:43:52.827719  380066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
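
Editor's note: the two commands above create /etc/cni/net.d and copy a 496-byte bridge conflist into it. The exact contents of 1-k8s.conflist are not shown in the log, so the sketch below writes a generic bridge+portmap conflist of the same shape; every field value here is an assumption, not minikube's generated file.

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// Illustrative CNI bridge configuration; values are assumptions and not
// necessarily what minikube writes to 1-k8s.conflist.
const bridgeConflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
    dir := "/etc/cni/net.d" // mirrors `sudo mkdir -p /etc/cni/net.d` above
    if err := os.MkdirAll(dir, 0o755); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    path := filepath.Join(dir, "1-k8s.conflist")
    if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("wrote", path)
}
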
	I1027 22:43:52.860311  380066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:43:52.869806  380066 system_pods.go:59] 7 kube-system pods found
	I1027 22:43:52.869868  380066 system_pods.go:61] "coredns-668d6bf9bc-tvfnf" [fd5ec91c-86a7-451a-a600-661d856d315f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:43:52.869878  380066 system_pods.go:61] "etcd-test-preload-764084" [39d9fae7-b8e5-4afb-9fa2-a1cbafcbab2e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:43:52.869900  380066 system_pods.go:61] "kube-apiserver-test-preload-764084" [df91fb50-4eb2-48e0-b9de-b0548e399481] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:43:52.869914  380066 system_pods.go:61] "kube-controller-manager-test-preload-764084" [1da81488-3998-4829-aedf-7ff469da2e44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:43:52.869920  380066 system_pods.go:61] "kube-proxy-7q9vz" [eb4cb03f-4d61-4d97-800e-4a06a0e81220] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 22:43:52.869928  380066 system_pods.go:61] "kube-scheduler-test-preload-764084" [6341c005-6062-4a35-b93d-c01f2285abc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:43:52.869933  380066 system_pods.go:61] "storage-provisioner" [084255c5-a7b3-4db0-8d15-3ee6acd4bc21] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:43:52.869941  380066 system_pods.go:74] duration metric: took 9.601362ms to wait for pod list to return data ...
	I1027 22:43:52.869954  380066 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:43:52.879434  380066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 22:43:52.879467  380066 node_conditions.go:123] node cpu capacity is 2
	I1027 22:43:52.879479  380066 node_conditions.go:105] duration metric: took 9.521056ms to run NodePressure ...
	I1027 22:43:52.879542  380066 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:43:53.219979  380066 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1027 22:43:53.225936  380066 kubeadm.go:744] kubelet initialised
	I1027 22:43:53.225961  380066 kubeadm.go:745] duration metric: took 5.953969ms waiting for restarted kubelet to initialise ...
	I1027 22:43:53.225979  380066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:43:53.249865  380066 ops.go:34] apiserver oom_adj: -16
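
Editor's note: the two lines above read the kube-apiserver's OOM adjustment via `cat /proc/$(pgrep kube-apiserver)/oom_adj`, which returns -16 here. A hedged Go equivalent of that check follows; the use of `pgrep -n` to pick the newest matching PID is my assumption, not taken from ops.go.

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

// oomAdjOf returns /proc/<pid>/oom_adj for the newest process whose name
// matches pattern, mirroring the shell pipeline in the log above.
func oomAdjOf(pattern string) (string, error) {
    out, err := exec.Command("pgrep", "-n", pattern).Output()
    if err != nil {
        return "", fmt.Errorf("pgrep %q: %w", pattern, err)
    }
    pid := strings.TrimSpace(string(out))
    data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    if err != nil {
        return "", err
    }
    return strings.TrimSpace(string(data)), nil
}

func main() {
    adj, err := oomAdjOf("kube-apiserver")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("kube-apiserver oom_adj:", adj) // the log above shows -16
}
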
	I1027 22:43:53.249907  380066 kubeadm.go:602] duration metric: took 8.281673299s to restartPrimaryControlPlane
	I1027 22:43:53.249920  380066 kubeadm.go:403] duration metric: took 8.337398822s to StartCluster
	I1027 22:43:53.249936  380066 settings.go:142] acquiring lock: {Name:mk9b0cd8ae1e83c76c2473e7845967d905910c67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:43:53.250022  380066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 22:43:53.250704  380066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/kubeconfig: {Name:mkf142c57fc1d516984237b4e01b6acd26119765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:43:53.251022  380066 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:43:53.251230  380066 config.go:182] Loaded profile config "test-preload-764084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1027 22:43:53.251155  380066 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:43:53.251272  380066 addons.go:69] Setting default-storageclass=true in profile "test-preload-764084"
	I1027 22:43:53.251316  380066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-764084"
	I1027 22:43:53.251272  380066 addons.go:69] Setting storage-provisioner=true in profile "test-preload-764084"
	I1027 22:43:53.251395  380066 addons.go:238] Setting addon storage-provisioner=true in "test-preload-764084"
	W1027 22:43:53.251425  380066 addons.go:247] addon storage-provisioner should already be in state true
	I1027 22:43:53.251464  380066 host.go:66] Checking if "test-preload-764084" exists ...
	I1027 22:43:53.252663  380066 out.go:179] * Verifying Kubernetes components...
	I1027 22:43:53.253706  380066 kapi.go:59] client config for test-preload-764084: &rest.Config{Host:"https://192.168.39.194:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/client.key", CAFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:43:53.254034  380066 addons.go:238] Setting addon default-storageclass=true in "test-preload-764084"
	W1027 22:43:53.254055  380066 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:43:53.254061  380066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:43:53.254092  380066 host.go:66] Checking if "test-preload-764084" exists ...
	I1027 22:43:53.255319  380066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:43:53.255631  380066 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:43:53.255649  380066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:43:53.256677  380066 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:43:53.256702  380066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:43:53.258651  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:53.259096  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:53.259129  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:53.259273  380066 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/test-preload-764084/id_rsa Username:docker}
	I1027 22:43:53.259857  380066 main.go:143] libmachine: domain test-preload-764084 has defined MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:53.260313  380066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a7:dd", ip: ""} in network mk-test-preload-764084: {Iface:virbr1 ExpiryTime:2025-10-27 23:43:33 +0000 UTC Type:0 Mac:52:54:00:bc:a7:dd Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-764084 Clientid:01:52:54:00:bc:a7:dd}
	I1027 22:43:53.260349  380066 main.go:143] libmachine: domain test-preload-764084 has defined IP address 192.168.39.194 and MAC address 52:54:00:bc:a7:dd in network mk-test-preload-764084
	I1027 22:43:53.260538  380066 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/test-preload-764084/id_rsa Username:docker}
	I1027 22:43:53.496603  380066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:43:53.522455  380066 node_ready.go:35] waiting up to 6m0s for node "test-preload-764084" to be "Ready" ...
	I1027 22:43:53.700981  380066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:43:53.724185  380066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:43:54.486466  380066 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1027 22:43:54.487791  380066 addons.go:514] duration metric: took 1.236643554s for enable addons: enabled=[default-storageclass storage-provisioner]
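
Editor's note: the addon manifests are copied onto the node and then applied with the cluster's pinned kubectl binary, as in the two `Run:` lines above. The sketch below shells out the same way from Go; the paths and the sudo env-assignment form are taken from the logged commands, and the error handling is simplified.

package main

import (
    "fmt"
    "os"
    "os/exec"
)

// applyAddon applies one manifest with the bundled kubectl, using the same
// KUBECONFIG and binary path that appear in the log above.
func applyAddon(manifest string) error {
    cmd := exec.Command("sudo",
        "KUBECONFIG=/var/lib/minikube/kubeconfig",
        "/var/lib/minikube/binaries/v1.32.0/kubectl",
        "apply", "-f", manifest)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    return cmd.Run()
}

func main() {
    for _, m := range []string{
        "/etc/kubernetes/addons/storageclass.yaml",
        "/etc/kubernetes/addons/storage-provisioner.yaml",
    } {
        if err := applyAddon(m); err != nil {
            fmt.Fprintln(os.Stderr, "apply", m, "failed:", err)
            os.Exit(1)
        }
    }
}
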
	W1027 22:43:55.527332  380066 node_ready.go:57] node "test-preload-764084" has "Ready":"False" status (will retry)
	W1027 22:43:58.027215  380066 node_ready.go:57] node "test-preload-764084" has "Ready":"False" status (will retry)
	W1027 22:44:00.029474  380066 node_ready.go:57] node "test-preload-764084" has "Ready":"False" status (will retry)
	I1027 22:44:02.026224  380066 node_ready.go:49] node "test-preload-764084" is "Ready"
	I1027 22:44:02.026268  380066 node_ready.go:38] duration metric: took 8.503748307s for node "test-preload-764084" to be "Ready" ...
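
Editor's note: node_ready.go above polls the node object until its Ready condition is True, which takes about 8.5s here. A rough client-go sketch of the same kind of poll follows; the kubeconfig path is the one from the log, while the 2s interval and the retry-on-error behaviour are assumptions, not minikube's implementation.

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its NodeReady condition is True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
        func(ctx context.Context) (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat API errors as transient and retry
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21790-352679/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    if err := waitNodeReady(context.Background(), cs, "test-preload-764084", 6*time.Minute); err != nil {
        fmt.Println("node not ready:", err)
    }
}
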
	I1027 22:44:02.026286  380066 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:44:02.026354  380066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:44:02.048598  380066 api_server.go:72] duration metric: took 8.797533731s to wait for apiserver process to appear ...
	I1027 22:44:02.048628  380066 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:44:02.048648  380066 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1027 22:44:02.055098  380066 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I1027 22:44:02.056251  380066 api_server.go:141] control plane version: v1.32.0
	I1027 22:44:02.056281  380066 api_server.go:131] duration metric: took 7.646227ms to wait for apiserver health ...
	I1027 22:44:02.056290  380066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:44:02.060825  380066 system_pods.go:59] 7 kube-system pods found
	I1027 22:44:02.060855  380066 system_pods.go:61] "coredns-668d6bf9bc-tvfnf" [fd5ec91c-86a7-451a-a600-661d856d315f] Running
	I1027 22:44:02.060860  380066 system_pods.go:61] "etcd-test-preload-764084" [39d9fae7-b8e5-4afb-9fa2-a1cbafcbab2e] Running
	I1027 22:44:02.060867  380066 system_pods.go:61] "kube-apiserver-test-preload-764084" [df91fb50-4eb2-48e0-b9de-b0548e399481] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:44:02.060871  380066 system_pods.go:61] "kube-controller-manager-test-preload-764084" [1da81488-3998-4829-aedf-7ff469da2e44] Running
	I1027 22:44:02.060882  380066 system_pods.go:61] "kube-proxy-7q9vz" [eb4cb03f-4d61-4d97-800e-4a06a0e81220] Running
	I1027 22:44:02.060898  380066 system_pods.go:61] "kube-scheduler-test-preload-764084" [6341c005-6062-4a35-b93d-c01f2285abc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:44:02.060902  380066 system_pods.go:61] "storage-provisioner" [084255c5-a7b3-4db0-8d15-3ee6acd4bc21] Running
	I1027 22:44:02.060910  380066 system_pods.go:74] duration metric: took 4.613529ms to wait for pod list to return data ...
	I1027 22:44:02.060918  380066 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:44:02.064345  380066 default_sa.go:45] found service account: "default"
	I1027 22:44:02.064379  380066 default_sa.go:55] duration metric: took 3.453403ms for default service account to be created ...
	I1027 22:44:02.064409  380066 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:44:02.067883  380066 system_pods.go:86] 7 kube-system pods found
	I1027 22:44:02.067924  380066 system_pods.go:89] "coredns-668d6bf9bc-tvfnf" [fd5ec91c-86a7-451a-a600-661d856d315f] Running
	I1027 22:44:02.067930  380066 system_pods.go:89] "etcd-test-preload-764084" [39d9fae7-b8e5-4afb-9fa2-a1cbafcbab2e] Running
	I1027 22:44:02.067937  380066 system_pods.go:89] "kube-apiserver-test-preload-764084" [df91fb50-4eb2-48e0-b9de-b0548e399481] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:44:02.067942  380066 system_pods.go:89] "kube-controller-manager-test-preload-764084" [1da81488-3998-4829-aedf-7ff469da2e44] Running
	I1027 22:44:02.067947  380066 system_pods.go:89] "kube-proxy-7q9vz" [eb4cb03f-4d61-4d97-800e-4a06a0e81220] Running
	I1027 22:44:02.067951  380066 system_pods.go:89] "kube-scheduler-test-preload-764084" [6341c005-6062-4a35-b93d-c01f2285abc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:44:02.067955  380066 system_pods.go:89] "storage-provisioner" [084255c5-a7b3-4db0-8d15-3ee6acd4bc21] Running
	I1027 22:44:02.067962  380066 system_pods.go:126] duration metric: took 3.545924ms to wait for k8s-apps to be running ...
	I1027 22:44:02.067972  380066 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:44:02.068019  380066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:44:02.086711  380066 system_svc.go:56] duration metric: took 18.72896ms WaitForService to wait for kubelet
	I1027 22:44:02.086746  380066 kubeadm.go:587] duration metric: took 8.835687514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:44:02.086767  380066 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:44:02.089590  380066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 22:44:02.089622  380066 node_conditions.go:123] node cpu capacity is 2
	I1027 22:44:02.089634  380066 node_conditions.go:105] duration metric: took 2.862602ms to run NodePressure ...
	I1027 22:44:02.089647  380066 start.go:242] waiting for startup goroutines ...
	I1027 22:44:02.089654  380066 start.go:247] waiting for cluster config update ...
	I1027 22:44:02.089664  380066 start.go:256] writing updated cluster config ...
	I1027 22:44:02.089990  380066 ssh_runner.go:195] Run: rm -f paused
	I1027 22:44:02.096642  380066 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:44:02.097443  380066 kapi.go:59] client config for test-preload-764084: &rest.Config{Host:"https://192.168.39.194:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/profiles/test-preload-764084/client.key", CAFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:44:02.101472  380066 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-tvfnf" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:02.108039  380066 pod_ready.go:94] pod "coredns-668d6bf9bc-tvfnf" is "Ready"
	I1027 22:44:02.108070  380066 pod_ready.go:86] duration metric: took 6.57283ms for pod "coredns-668d6bf9bc-tvfnf" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:02.111195  380066 pod_ready.go:83] waiting for pod "etcd-test-preload-764084" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:02.117208  380066 pod_ready.go:94] pod "etcd-test-preload-764084" is "Ready"
	I1027 22:44:02.117239  380066 pod_ready.go:86] duration metric: took 6.007364ms for pod "etcd-test-preload-764084" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:02.119550  380066 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-764084" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:02.626993  380066 pod_ready.go:94] pod "kube-apiserver-test-preload-764084" is "Ready"
	I1027 22:44:02.627032  380066 pod_ready.go:86] duration metric: took 507.451573ms for pod "kube-apiserver-test-preload-764084" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:02.630970  380066 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-764084" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:02.901335  380066 pod_ready.go:94] pod "kube-controller-manager-test-preload-764084" is "Ready"
	I1027 22:44:02.901378  380066 pod_ready.go:86] duration metric: took 270.368147ms for pod "kube-controller-manager-test-preload-764084" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:03.103127  380066 pod_ready.go:83] waiting for pod "kube-proxy-7q9vz" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:03.501943  380066 pod_ready.go:94] pod "kube-proxy-7q9vz" is "Ready"
	I1027 22:44:03.501974  380066 pod_ready.go:86] duration metric: took 398.808036ms for pod "kube-proxy-7q9vz" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:03.701674  380066 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-764084" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 22:44:05.708347  380066 pod_ready.go:104] pod "kube-scheduler-test-preload-764084" is not "Ready", error: <nil>
	I1027 22:44:06.208487  380066 pod_ready.go:94] pod "kube-scheduler-test-preload-764084" is "Ready"
	I1027 22:44:06.208528  380066 pod_ready.go:86] duration metric: took 2.506814212s for pod "kube-scheduler-test-preload-764084" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:44:06.208545  380066 pod_ready.go:40] duration metric: took 4.111867251s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:44:06.256522  380066 start.go:626] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1027 22:44:06.258164  380066 out.go:203] 
	W1027 22:44:06.259588  380066 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1027 22:44:06.260919  380066 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1027 22:44:06.262135  380066 out.go:179] * Done! kubectl is now configured to use "test-preload-764084" cluster and "default" namespace by default
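
Editor's note: the warning a few lines up comes from comparing the host kubectl version (1.34.1) with the cluster version (1.32.0) and flagging a minor-version skew of 2; kubectl is only supported within one minor version of the apiserver. Below is a standalone sketch of that comparison with simple version parsing, assumed to mirror (not reproduce) the check in start.go.

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// minorSkew returns the absolute difference between the minor components
// of two "major.minor.patch" version strings.
func minorSkew(a, b string) (int, error) {
    ma, err := minor(a)
    if err != nil {
        return 0, err
    }
    mb, err := minor(b)
    if err != nil {
        return 0, err
    }
    if ma > mb {
        return ma - mb, nil
    }
    return mb - ma, nil
}

func minor(v string) (int, error) {
    parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    if len(parts) < 2 {
        return 0, fmt.Errorf("unexpected version %q", v)
    }
    return strconv.Atoi(parts[1])
}

func main() {
    skew, err := minorSkew("1.34.1", "1.32.0")
    if err != nil {
        panic(err)
    }
    fmt.Println("minor skew:", skew) // 2, matching the warning in the log
    if skew > 1 {
        fmt.Println("kubectl may have incompatibilities with this cluster")
    }
}
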
	
	
	==> CRI-O <==
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.108565520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605047108534580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7322db2e-3c32-44a2-9bec-fa6a96761252 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.109566594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dc62a6c-6ac9-4468-bfea-2dd4acb44b9b name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.109907596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dc62a6c-6ac9-4468-bfea-2dd4acb44b9b name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.110260352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f57a13696534e57ecb41cf63abc0c84fa8771d122c99019855a6332cd0678bb0,PodSandboxId:a6a3de827935f4af231a976a3411aed78e45ff9a91c6736ed2c8c9d5cd300672,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761605039759018926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tvfnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd5ec91c-86a7-451a-a600-661d856d315f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92ffdda6d0bed0b732511a785c753771561b74d6c9b08e5db6edd5164939496d,PodSandboxId:23e0a62c4e1e9ee6d05056dc456e95ff3e862051ef4a12ef49d32a1026cc554a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761605032395041114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7q9vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: eb4cb03f-4d61-4d97-800e-4a06a0e81220,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb213bbc0499a6b58f71787e71bc3ad55508fb29120426d41b28eaeb9dda6a3,PodSandboxId:afde189dd962025ed5d3014ec09ec7968b5cf5b9c48a1fe7ed411e1e2f2393e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761605032254705347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08
4255c5-a7b3-4db0-8d15-3ee6acd4bc21,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47ab4db0910f1161c48403789fa2bb636fdf14b1a00fc4cc2715b5be433b359,PodSandboxId:046de86cd0269526d2bdeaeabd03b0a6172c288596ef0ef1b0aaa370497af5cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761605027700245398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d30e569ea28242067b1c91b56aeaef54,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fe3c591dcf0f347225db91ba714c06fb23985e89a29f808785ecf8ebe1f2f6d,PodSandboxId:331d890c73f749e99ddcc8c4e690fe764f07e708cb779cb7adb4f7c495314619,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761605027708698826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 78ff4c98bdb3caeac097dee4dfe1cff1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f84f0caed6d06eaf012fefbe6b20cadc598c81f3bfe87cfa8b5cfb6797253,PodSandboxId:990ebe176c815c963916952349371fea5403bdd6190c4b3930a1dc96faf240b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761605027684486106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac585d648eb07d4d3c204303149adc26,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f26ea2d274a397b406640c7c0236e56199a7f57e309929c1828a8d426f58d11,PodSandboxId:193a9fa8afb5dde53559126252223720b12ce4b751e1b0de86e49c0f570fea9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761605027644029646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607702b32a619754a10f2fd77ad89cc3,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7dc62a6c-6ac9-4468-bfea-2dd4acb44b9b name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.179063433Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc8151d7-5e9c-450e-8296-5b2390c6568d name=/runtime.v1.RuntimeService/Version
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.179357063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc8151d7-5e9c-450e-8296-5b2390c6568d name=/runtime.v1.RuntimeService/Version
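
Editor's note: the CRI-O entries in this section are CRI gRPC requests and responses (Version, ImageFsInfo, ListContainers) traced by the otel-collector interceptors. A minimal Go client for the same Version call is sketched below, assuming CRI-O's default socket path /var/run/crio/crio.sock.

package main

import (
    "context"
    "fmt"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Assumed default CRI-O socket; other setups may place it elsewhere.
    conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    client := runtimeapi.NewRuntimeServiceClient(conn)
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Mirrors the VersionRequest/VersionResponse pair logged above.
    resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
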
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.181514481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da44e892-8892-4eb3-bb17-877e514375a3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.182239259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605047182193170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da44e892-8892-4eb3-bb17-877e514375a3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.182975063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e4fcc91-37cb-44a4-8a63-9a5689d659db name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.183076786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e4fcc91-37cb-44a4-8a63-9a5689d659db name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.183372592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f57a13696534e57ecb41cf63abc0c84fa8771d122c99019855a6332cd0678bb0,PodSandboxId:a6a3de827935f4af231a976a3411aed78e45ff9a91c6736ed2c8c9d5cd300672,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761605039759018926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tvfnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd5ec91c-86a7-451a-a600-661d856d315f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92ffdda6d0bed0b732511a785c753771561b74d6c9b08e5db6edd5164939496d,PodSandboxId:23e0a62c4e1e9ee6d05056dc456e95ff3e862051ef4a12ef49d32a1026cc554a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761605032395041114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7q9vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: eb4cb03f-4d61-4d97-800e-4a06a0e81220,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb213bbc0499a6b58f71787e71bc3ad55508fb29120426d41b28eaeb9dda6a3,PodSandboxId:afde189dd962025ed5d3014ec09ec7968b5cf5b9c48a1fe7ed411e1e2f2393e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761605032254705347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08
4255c5-a7b3-4db0-8d15-3ee6acd4bc21,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47ab4db0910f1161c48403789fa2bb636fdf14b1a00fc4cc2715b5be433b359,PodSandboxId:046de86cd0269526d2bdeaeabd03b0a6172c288596ef0ef1b0aaa370497af5cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761605027700245398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d30e569ea28242067b1c91b56aeaef54,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fe3c591dcf0f347225db91ba714c06fb23985e89a29f808785ecf8ebe1f2f6d,PodSandboxId:331d890c73f749e99ddcc8c4e690fe764f07e708cb779cb7adb4f7c495314619,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761605027708698826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 78ff4c98bdb3caeac097dee4dfe1cff1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f84f0caed6d06eaf012fefbe6b20cadc598c81f3bfe87cfa8b5cfb6797253,PodSandboxId:990ebe176c815c963916952349371fea5403bdd6190c4b3930a1dc96faf240b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761605027684486106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac585d648eb07d4d3c204303149adc26,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f26ea2d274a397b406640c7c0236e56199a7f57e309929c1828a8d426f58d11,PodSandboxId:193a9fa8afb5dde53559126252223720b12ce4b751e1b0de86e49c0f570fea9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761605027644029646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607702b32a619754a10f2fd77ad89cc3,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e4fcc91-37cb-44a4-8a63-9a5689d659db name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.229012862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d540cf8-076e-4d92-8a14-0adc36f7e47f name=/runtime.v1.RuntimeService/Version
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.229088572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d540cf8-076e-4d92-8a14-0adc36f7e47f name=/runtime.v1.RuntimeService/Version
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.230625271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac57e0dc-b44c-4474-807b-6574b2d4b2a8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.231102870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605047231077819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac57e0dc-b44c-4474-807b-6574b2d4b2a8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.232092494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b1ab7f5-84b1-4111-bc24-593c3430f387 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.232167476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b1ab7f5-84b1-4111-bc24-593c3430f387 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.232338103Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f57a13696534e57ecb41cf63abc0c84fa8771d122c99019855a6332cd0678bb0,PodSandboxId:a6a3de827935f4af231a976a3411aed78e45ff9a91c6736ed2c8c9d5cd300672,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761605039759018926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tvfnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd5ec91c-86a7-451a-a600-661d856d315f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92ffdda6d0bed0b732511a785c753771561b74d6c9b08e5db6edd5164939496d,PodSandboxId:23e0a62c4e1e9ee6d05056dc456e95ff3e862051ef4a12ef49d32a1026cc554a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761605032395041114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7q9vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: eb4cb03f-4d61-4d97-800e-4a06a0e81220,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb213bbc0499a6b58f71787e71bc3ad55508fb29120426d41b28eaeb9dda6a3,PodSandboxId:afde189dd962025ed5d3014ec09ec7968b5cf5b9c48a1fe7ed411e1e2f2393e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761605032254705347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08
4255c5-a7b3-4db0-8d15-3ee6acd4bc21,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47ab4db0910f1161c48403789fa2bb636fdf14b1a00fc4cc2715b5be433b359,PodSandboxId:046de86cd0269526d2bdeaeabd03b0a6172c288596ef0ef1b0aaa370497af5cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761605027700245398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d30e569ea28242067b1c91b56aeaef54,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fe3c591dcf0f347225db91ba714c06fb23985e89a29f808785ecf8ebe1f2f6d,PodSandboxId:331d890c73f749e99ddcc8c4e690fe764f07e708cb779cb7adb4f7c495314619,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761605027708698826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 78ff4c98bdb3caeac097dee4dfe1cff1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f84f0caed6d06eaf012fefbe6b20cadc598c81f3bfe87cfa8b5cfb6797253,PodSandboxId:990ebe176c815c963916952349371fea5403bdd6190c4b3930a1dc96faf240b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761605027684486106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac585d648eb07d4d3c204303149adc26,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f26ea2d274a397b406640c7c0236e56199a7f57e309929c1828a8d426f58d11,PodSandboxId:193a9fa8afb5dde53559126252223720b12ce4b751e1b0de86e49c0f570fea9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761605027644029646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607702b32a619754a10f2fd77ad89cc3,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b1ab7f5-84b1-4111-bc24-593c3430f387 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.270036195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=944dd454-e0ce-41be-ae96-1a232289f979 name=/runtime.v1.RuntimeService/Version
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.270142166Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=944dd454-e0ce-41be-ae96-1a232289f979 name=/runtime.v1.RuntimeService/Version
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.271387379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e1f62b9-5648-4b51-909d-b0b8b5da9ae0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.272082044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605047272056445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e1f62b9-5648-4b51-909d-b0b8b5da9ae0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.273096375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81050f32-e0d1-4ff9-8b0c-15933dbf0fc5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.273171643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81050f32-e0d1-4ff9-8b0c-15933dbf0fc5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:44:07 test-preload-764084 crio[843]: time="2025-10-27 22:44:07.273325837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f57a13696534e57ecb41cf63abc0c84fa8771d122c99019855a6332cd0678bb0,PodSandboxId:a6a3de827935f4af231a976a3411aed78e45ff9a91c6736ed2c8c9d5cd300672,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761605039759018926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tvfnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd5ec91c-86a7-451a-a600-661d856d315f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92ffdda6d0bed0b732511a785c753771561b74d6c9b08e5db6edd5164939496d,PodSandboxId:23e0a62c4e1e9ee6d05056dc456e95ff3e862051ef4a12ef49d32a1026cc554a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761605032395041114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7q9vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: eb4cb03f-4d61-4d97-800e-4a06a0e81220,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cb213bbc0499a6b58f71787e71bc3ad55508fb29120426d41b28eaeb9dda6a3,PodSandboxId:afde189dd962025ed5d3014ec09ec7968b5cf5b9c48a1fe7ed411e1e2f2393e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761605032254705347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08
4255c5-a7b3-4db0-8d15-3ee6acd4bc21,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47ab4db0910f1161c48403789fa2bb636fdf14b1a00fc4cc2715b5be433b359,PodSandboxId:046de86cd0269526d2bdeaeabd03b0a6172c288596ef0ef1b0aaa370497af5cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761605027700245398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: d30e569ea28242067b1c91b56aeaef54,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fe3c591dcf0f347225db91ba714c06fb23985e89a29f808785ecf8ebe1f2f6d,PodSandboxId:331d890c73f749e99ddcc8c4e690fe764f07e708cb779cb7adb4f7c495314619,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761605027708698826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 78ff4c98bdb3caeac097dee4dfe1cff1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998f84f0caed6d06eaf012fefbe6b20cadc598c81f3bfe87cfa8b5cfb6797253,PodSandboxId:990ebe176c815c963916952349371fea5403bdd6190c4b3930a1dc96faf240b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761605027684486106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac585d648eb07d4d3c204303149adc26,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f26ea2d274a397b406640c7c0236e56199a7f57e309929c1828a8d426f58d11,PodSandboxId:193a9fa8afb5dde53559126252223720b12ce4b751e1b0de86e49c0f570fea9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761605027644029646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-764084,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607702b32a619754a10f2fd77ad89cc3,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81050f32-e0d1-4ff9-8b0c-15933dbf0fc5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f57a13696534e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   7 seconds ago       Running             coredns                   1                   a6a3de827935f       coredns-668d6bf9bc-tvfnf
	92ffdda6d0bed       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   23e0a62c4e1e9       kube-proxy-7q9vz
	9cb213bbc0499       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   afde189dd9620       storage-provisioner
	3fe3c591dcf0f       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   331d890c73f74       kube-scheduler-test-preload-764084
	e47ab4db0910f       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   046de86cd0269       kube-controller-manager-test-preload-764084
	998f84f0caed6       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   990ebe176c815       etcd-test-preload-764084
	1f26ea2d274a3       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   193a9fa8afb5d       kube-apiserver-test-preload-764084
	
	
	==> coredns [f57a13696534e57ecb41cf63abc0c84fa8771d122c99019855a6332cd0678bb0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55086 - 55624 "HINFO IN 994586959539720736.9168469197613815330. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.069807917s
	
	
	==> describe nodes <==
	Name:               test-preload-764084
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-764084
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=test-preload-764084
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_42_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:42:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-764084
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:44:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:44:01 +0000   Mon, 27 Oct 2025 22:42:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:44:01 +0000   Mon, 27 Oct 2025 22:42:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:44:01 +0000   Mon, 27 Oct 2025 22:42:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:44:01 +0000   Mon, 27 Oct 2025 22:44:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    test-preload-764084
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7b9ec5d994641fab9934a0d3ad6b1e1
	  System UUID:                d7b9ec5d-9946-41fa-b993-4a0d3ad6b1e1
	  Boot ID:                    b6091128-7d1b-4f59-8db1-20e76e2f555f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-tvfnf                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     98s
	  kube-system                 etcd-test-preload-764084                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         102s
	  kube-system                 kube-apiserver-test-preload-764084             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-test-preload-764084    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-7q9vz                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-test-preload-764084             100m (5%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 95s                  kube-proxy       
	  Normal   Starting                 14s                  kube-proxy       
	  Normal   Starting                 109s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  109s (x8 over 109s)  kubelet          Node test-preload-764084 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    109s (x8 over 109s)  kubelet          Node test-preload-764084 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     109s (x7 over 109s)  kubelet          Node test-preload-764084 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    102s                 kubelet          Node test-preload-764084 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  102s                 kubelet          Node test-preload-764084 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     102s                 kubelet          Node test-preload-764084 status is now: NodeHasSufficientPID
	  Normal   Starting                 102s                 kubelet          Starting kubelet.
	  Normal   NodeReady                101s                 kubelet          Node test-preload-764084 status is now: NodeReady
	  Normal   RegisteredNode           99s                  node-controller  Node test-preload-764084 event: Registered Node test-preload-764084 in Controller
	  Normal   Starting                 21s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-764084 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-764084 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-764084 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                  kubelet          Node test-preload-764084 has been rebooted, boot id: b6091128-7d1b-4f59-8db1-20e76e2f555f
	  Normal   RegisteredNode           13s                  node-controller  Node test-preload-764084 event: Registered Node test-preload-764084 in Controller
	
	
	==> dmesg <==
	[Oct27 22:43] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000052] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004139] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.016078] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084756] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.115068] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.675878] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.000126] kauditd_printk_skb: 128 callbacks suppressed
	
	
	==> etcd [998f84f0caed6d06eaf012fefbe6b20cadc598c81f3bfe87cfa8b5cfb6797253] <==
	{"level":"info","ts":"2025-10-27T22:43:48.196494Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T22:43:48.188484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 switched to configuration voters=(13023703437973933201)"}
	{"level":"info","ts":"2025-10-27T22:43:48.199004Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2025-10-27T22:43:48.199041Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2025-10-27T22:43:48.199154Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","added-peer-id":"b4bd7d4638784c91","added-peer-peer-urls":["https://192.168.39.194:2380"]}
	{"level":"info","ts":"2025-10-27T22:43:48.201926Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"b4bd7d4638784c91","initial-advertise-peer-urls":["https://192.168.39.194:2380"],"listen-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.194:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T22:43:48.201993Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T22:43:48.202474Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T22:43:48.202509Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T22:43:49.651270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-27T22:43:49.651334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-27T22:43:49.651371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgPreVoteResp from b4bd7d4638784c91 at term 2"}
	{"level":"info","ts":"2025-10-27T22:43:49.651384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became candidate at term 3"}
	{"level":"info","ts":"2025-10-27T22:43:49.651389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgVoteResp from b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2025-10-27T22:43:49.651397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became leader at term 3"}
	{"level":"info","ts":"2025-10-27T22:43:49.651404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b4bd7d4638784c91 elected leader b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2025-10-27T22:43:49.658315Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"b4bd7d4638784c91","local-member-attributes":"{Name:test-preload-764084 ClientURLs:[https://192.168.39.194:2379]}","request-path":"/0/members/b4bd7d4638784c91/attributes","cluster-id":"bb2ce3d66f8fb721","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T22:43:49.658498Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T22:43:49.658678Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T22:43:49.659478Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-27T22:43:49.659637Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T22:43:49.659654Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-27T22:43:49.660125Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-27T22:43:49.660227Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-27T22:43:49.660946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.194:2379"}
	
	
	==> kernel <==
	 22:44:07 up 0 min,  0 users,  load average: 1.70, 0.46, 0.16
	Linux test-preload-764084 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Oct 25 21:00:46 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1f26ea2d274a397b406640c7c0236e56199a7f57e309929c1828a8d426f58d11] <==
	I1027 22:43:51.066526       1 policy_source.go:240] refreshing policies
	I1027 22:43:51.071301       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 22:43:51.071348       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 22:43:51.071893       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 22:43:51.074887       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1027 22:43:51.074935       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 22:43:51.074934       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 22:43:51.075171       1 shared_informer.go:320] Caches are synced for configmaps
	I1027 22:43:51.080100       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:43:51.081657       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1027 22:43:51.082355       1 aggregator.go:171] initial CRD sync complete...
	I1027 22:43:51.082526       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 22:43:51.082578       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:43:51.082601       1 cache.go:39] Caches are synced for autoregister controller
	E1027 22:43:51.083345       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 22:43:51.115265       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:43:51.717471       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1027 22:43:51.876924       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:43:53.012463       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1027 22:43:53.077381       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1027 22:43:53.138499       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:43:53.156147       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:43:54.306636       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1027 22:43:54.554300       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:43:54.660190       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e47ab4db0910f1161c48403789fa2bb636fdf14b1a00fc4cc2715b5be433b359] <==
	I1027 22:43:54.308500       1 shared_informer.go:320] Caches are synced for stateful set
	I1027 22:43:54.312832       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-764084"
	I1027 22:43:54.313917       1 shared_informer.go:320] Caches are synced for HPA
	I1027 22:43:54.321354       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1027 22:43:54.324030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="56.962801ms"
	I1027 22:43:54.324513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="115.366µs"
	I1027 22:43:54.329928       1 shared_informer.go:320] Caches are synced for daemon sets
	I1027 22:43:54.336849       1 shared_informer.go:320] Caches are synced for disruption
	I1027 22:43:54.346248       1 shared_informer.go:320] Caches are synced for garbage collector
	I1027 22:43:54.351788       1 shared_informer.go:320] Caches are synced for ephemeral
	I1027 22:43:54.351928       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1027 22:43:54.354531       1 shared_informer.go:320] Caches are synced for deployment
	I1027 22:43:54.355829       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1027 22:43:54.355912       1 shared_informer.go:320] Caches are synced for attach detach
	I1027 22:43:54.356207       1 shared_informer.go:320] Caches are synced for endpoint
	I1027 22:43:54.356266       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1027 22:43:54.380126       1 shared_informer.go:320] Caches are synced for garbage collector
	I1027 22:43:54.380173       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 22:43:54.380185       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 22:43:59.928837       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.706µs"
	I1027 22:43:59.963690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.718293ms"
	I1027 22:43:59.964452       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="666.92µs"
	I1027 22:44:01.563892       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-764084"
	I1027 22:44:01.579630       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-764084"
	I1027 22:44:04.283941       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [92ffdda6d0bed0b732511a785c753771561b74d6c9b08e5db6edd5164939496d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1027 22:43:52.601644       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1027 22:43:52.611832       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.194"]
	E1027 22:43:52.611970       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:43:52.657700       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1027 22:43:52.657923       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 22:43:52.657966       1 server_linux.go:170] "Using iptables Proxier"
	I1027 22:43:52.661256       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:43:52.661597       1 server.go:497] "Version info" version="v1.32.0"
	I1027 22:43:52.661631       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:43:52.663582       1 config.go:199] "Starting service config controller"
	I1027 22:43:52.663637       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1027 22:43:52.663670       1 config.go:105] "Starting endpoint slice config controller"
	I1027 22:43:52.663674       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1027 22:43:52.664501       1 config.go:329] "Starting node config controller"
	I1027 22:43:52.664534       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1027 22:43:52.763843       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1027 22:43:52.763906       1 shared_informer.go:320] Caches are synced for service config
	I1027 22:43:52.765628       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3fe3c591dcf0f347225db91ba714c06fb23985e89a29f808785ecf8ebe1f2f6d] <==
	I1027 22:43:48.913869       1 serving.go:386] Generated self-signed cert in-memory
	W1027 22:43:50.974336       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:43:50.974715       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:43:50.975843       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:43:50.975911       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:43:51.023697       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1027 22:43:51.023815       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:43:51.026533       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:43:51.026585       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1027 22:43:51.026609       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1027 22:43:51.026656       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:43:51.126830       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: I1027 22:43:51.173278    1169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-764084"
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: E1027 22:43:51.186538    1169 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-764084\" already exists" pod="kube-system/kube-controller-manager-test-preload-764084"
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: I1027 22:43:51.505157    1169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-764084"
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: E1027 22:43:51.516349    1169 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-764084\" already exists" pod="kube-system/kube-controller-manager-test-preload-764084"
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: I1027 22:43:51.646344    1169 apiserver.go:52] "Watching apiserver"
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: E1027 22:43:51.652469    1169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-tvfnf" podUID="fd5ec91c-86a7-451a-a600-661d856d315f"
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: I1027 22:43:51.669202    1169 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: I1027 22:43:51.710934    1169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb4cb03f-4d61-4d97-800e-4a06a0e81220-xtables-lock\") pod \"kube-proxy-7q9vz\" (UID: \"eb4cb03f-4d61-4d97-800e-4a06a0e81220\") " pod="kube-system/kube-proxy-7q9vz"
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: I1027 22:43:51.711033    1169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb4cb03f-4d61-4d97-800e-4a06a0e81220-lib-modules\") pod \"kube-proxy-7q9vz\" (UID: \"eb4cb03f-4d61-4d97-800e-4a06a0e81220\") " pod="kube-system/kube-proxy-7q9vz"
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: I1027 22:43:51.711089    1169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/084255c5-a7b3-4db0-8d15-3ee6acd4bc21-tmp\") pod \"storage-provisioner\" (UID: \"084255c5-a7b3-4db0-8d15-3ee6acd4bc21\") " pod="kube-system/storage-provisioner"
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: E1027 22:43:51.712463    1169 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: E1027 22:43:51.712601    1169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd5ec91c-86a7-451a-a600-661d856d315f-config-volume podName:fd5ec91c-86a7-451a-a600-661d856d315f nodeName:}" failed. No retries permitted until 2025-10-27 22:43:52.212577273 +0000 UTC m=+5.673914793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd5ec91c-86a7-451a-a600-661d856d315f-config-volume") pod "coredns-668d6bf9bc-tvfnf" (UID: "fd5ec91c-86a7-451a-a600-661d856d315f") : object "kube-system"/"coredns" not registered
	Oct 27 22:43:51 test-preload-764084 kubelet[1169]: E1027 22:43:51.760402    1169 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Oct 27 22:43:52 test-preload-764084 kubelet[1169]: E1027 22:43:52.215124    1169 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 27 22:43:52 test-preload-764084 kubelet[1169]: E1027 22:43:52.215216    1169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd5ec91c-86a7-451a-a600-661d856d315f-config-volume podName:fd5ec91c-86a7-451a-a600-661d856d315f nodeName:}" failed. No retries permitted until 2025-10-27 22:43:53.215202782 +0000 UTC m=+6.676540289 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd5ec91c-86a7-451a-a600-661d856d315f-config-volume") pod "coredns-668d6bf9bc-tvfnf" (UID: "fd5ec91c-86a7-451a-a600-661d856d315f") : object "kube-system"/"coredns" not registered
	Oct 27 22:43:53 test-preload-764084 kubelet[1169]: E1027 22:43:53.229492    1169 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 27 22:43:53 test-preload-764084 kubelet[1169]: E1027 22:43:53.229562    1169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd5ec91c-86a7-451a-a600-661d856d315f-config-volume podName:fd5ec91c-86a7-451a-a600-661d856d315f nodeName:}" failed. No retries permitted until 2025-10-27 22:43:55.229549236 +0000 UTC m=+8.690886756 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd5ec91c-86a7-451a-a600-661d856d315f-config-volume") pod "coredns-668d6bf9bc-tvfnf" (UID: "fd5ec91c-86a7-451a-a600-661d856d315f") : object "kube-system"/"coredns" not registered
	Oct 27 22:43:53 test-preload-764084 kubelet[1169]: E1027 22:43:53.694834    1169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-tvfnf" podUID="fd5ec91c-86a7-451a-a600-661d856d315f"
	Oct 27 22:43:55 test-preload-764084 kubelet[1169]: E1027 22:43:55.246977    1169 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 27 22:43:55 test-preload-764084 kubelet[1169]: E1027 22:43:55.247062    1169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd5ec91c-86a7-451a-a600-661d856d315f-config-volume podName:fd5ec91c-86a7-451a-a600-661d856d315f nodeName:}" failed. No retries permitted until 2025-10-27 22:43:59.24704836 +0000 UTC m=+12.708385879 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd5ec91c-86a7-451a-a600-661d856d315f-config-volume") pod "coredns-668d6bf9bc-tvfnf" (UID: "fd5ec91c-86a7-451a-a600-661d856d315f") : object "kube-system"/"coredns" not registered
	Oct 27 22:43:55 test-preload-764084 kubelet[1169]: E1027 22:43:55.693342    1169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-tvfnf" podUID="fd5ec91c-86a7-451a-a600-661d856d315f"
	Oct 27 22:43:56 test-preload-764084 kubelet[1169]: E1027 22:43:56.762299    1169 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605036761670925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 27 22:43:56 test-preload-764084 kubelet[1169]: E1027 22:43:56.762324    1169 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605036761670925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 27 22:44:06 test-preload-764084 kubelet[1169]: E1027 22:44:06.765146    1169 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605046763909717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 27 22:44:06 test-preload-764084 kubelet[1169]: E1027 22:44:06.765876    1169 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605046763909717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9cb213bbc0499a6b58f71787e71bc3ad55508fb29120426d41b28eaeb9dda6a3] <==
	I1027 22:43:52.433475       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-764084 -n test-preload-764084
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-764084 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-764084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-764084
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-764084: (1.019148956s)
--- FAIL: TestPreload (157.75s)

                                                
                                    
x
+
TestKubernetesUpgrade (935.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-216520 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-216520 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.988070059s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-216520
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-216520: (2.202393258s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-216520 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-216520 status --format={{.Host}}: exit status 7 (81.19548ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
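The check above accepts exit status 7 from "minikube status" as an expected Stopped state rather than a hard failure. Below is a minimal Go sketch of that kind of gate; it is not the actual test helper, and the binary path, profile name, and the decision to treat exit code 7 as "stopped, ok" are taken only from the log lines above.

// Hedged sketch: run "minikube status" and treat exit code 7 (host reported Stopped) as acceptable.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "kubernetes-upgrade-216520")
	out, err := cmd.CombinedOutput()
	fmt.Printf("status output: %s", out)

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Mirrors the "(may be ok)" handling above: a stopped host is fine before a versioned restart.
		fmt.Println("host stopped; safe to continue with the restart")
		return
	}
	if err != nil {
		fmt.Println("unexpected status error:", err)
	}
}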
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-216520 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-216520 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.949750351s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-216520 version --output=json
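The "kubectl version --output=json" call above is presumably how the harness confirms the apiserver now reports the upgraded version. A hedged Go sketch of that verification follows, assuming the standard kubectl version JSON shape (serverVersion.gitVersion); the context name and the expected v1.34.1 string come from the commands above.

// Hedged sketch: parse "kubectl version --output=json" and check serverVersion.gitVersion.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type kubectlVersion struct {
	ServerVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-216520",
		"version", "--output=json").Output()
	if err != nil {
		log.Fatalf("kubectl version failed: %v", err)
	}
	var v kubectlVersion
	if err := json.Unmarshal(out, &v); err != nil {
		log.Fatalf("could not parse version JSON: %v", err)
	}
	if v.ServerVersion.GitVersion != "v1.34.1" {
		log.Fatalf("expected server v1.34.1, got %q", v.ServerVersion.GitVersion)
	}
	fmt.Println("server reports", v.ServerVersion.GitVersion)
}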
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-216520 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-216520 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (86.913092ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-216520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-216520
	    minikube start -p kubernetes-upgrade-216520 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2165202 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-216520 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
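The exit-106 refusal above prints its own recovery options; the first one (delete the profile, then recreate it at the older version) is reproduced below as a hedged Go sketch. It simply shells out to the two commands from the printed suggestion, is not part of the test flow, and assumes a minikube binary on PATH.

// Hedged sketch of recovery option 1 printed above: delete, then recreate at v1.28.0.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v failed: %v", name, args, err)
	}
}

func main() {
	// Profile name and version string are taken from the suggestion text above.
	run("minikube", "delete", "-p", "kubernetes-upgrade-216520")
	run("minikube", "start", "-p", "kubernetes-upgrade-216520", "--kubernetes-version=v1.28.0")
}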
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-216520 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-216520 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 80 (13m51.170492648s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-216520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-216520" primary control-plane node in "kubernetes-upgrade-216520" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:51:43.499123  387237 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:51:43.499415  387237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:51:43.499427  387237 out.go:374] Setting ErrFile to fd 2...
	I1027 22:51:43.499432  387237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:51:43.499664  387237 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:51:43.500223  387237 out.go:368] Setting JSON to false
	I1027 22:51:43.501310  387237 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9251,"bootTime":1761596253,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:51:43.501414  387237 start.go:143] virtualization: kvm guest
	I1027 22:51:43.503655  387237 out.go:179] * [kubernetes-upgrade-216520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:51:43.505036  387237 notify.go:221] Checking for updates...
	I1027 22:51:43.505050  387237 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:51:43.506668  387237 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:51:43.508033  387237 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 22:51:43.509438  387237 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 22:51:43.510616  387237 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:51:43.511910  387237 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:51:43.513613  387237 config.go:182] Loaded profile config "kubernetes-upgrade-216520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:51:43.514099  387237 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:51:43.556628  387237 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 22:51:43.557929  387237 start.go:307] selected driver: kvm2
	I1027 22:51:43.557951  387237 start.go:928] validating driver "kvm2" against &{Name:kubernetes-upgrade-216520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-216520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.85 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:51:43.558090  387237 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:51:43.559222  387237 cni.go:84] Creating CNI manager for ""
	I1027 22:51:43.559295  387237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 22:51:43.559352  387237 start.go:351] cluster config:
	{Name:kubernetes-upgrade-216520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-216520 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.85 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:51:43.559471  387237 iso.go:125] acquiring lock: {Name:mk5fa90c3652bd8ab3eadfb83a864dcc2e121c25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:51:43.560998  387237 out.go:179] * Starting "kubernetes-upgrade-216520" primary control-plane node in "kubernetes-upgrade-216520" cluster
	I1027 22:51:43.562161  387237 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:51:43.562205  387237 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:51:43.562215  387237 cache.go:59] Caching tarball of preloaded images
	I1027 22:51:43.562316  387237 preload.go:233] Found /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:51:43.562327  387237 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
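The preload step above reuses a cached tarball of container images instead of pulling them. A minimal sketch for inspecting that tarball on the Jenkins host, assuming lz4 and GNU tar are installed (the path is the one reported in the log; this is not part of minikube's own code path):

    # List the first few entries of the cached preload tarball (read-only check).
    lz4 -dc /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 \
      | tar -tf - | head -n 20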
	I1027 22:51:43.562417  387237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/config.json ...
	I1027 22:51:43.562610  387237 start.go:360] acquireMachinesLock for kubernetes-upgrade-216520: {Name:mka983f7fa498b8241736aecc4fdc7843c00414d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 22:51:43.562693  387237 start.go:364] duration metric: took 62.651µs to acquireMachinesLock for "kubernetes-upgrade-216520"
	I1027 22:51:43.562709  387237 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:51:43.562715  387237 fix.go:55] fixHost starting: 
	I1027 22:51:43.564710  387237 fix.go:113] recreateIfNeeded on kubernetes-upgrade-216520: state=Running err=<nil>
	W1027 22:51:43.564733  387237 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 22:51:43.566450  387237 out.go:252] * Updating the running kvm2 "kubernetes-upgrade-216520" VM ...
	I1027 22:51:43.566491  387237 machine.go:94] provisionDockerMachine start ...
	I1027 22:51:43.569400  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:43.569884  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:43.569924  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:43.570133  387237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:51:43.570409  387237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I1027 22:51:43.570423  387237 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:51:43.695506  387237 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-216520
	
	I1027 22:51:43.695545  387237 buildroot.go:166] provisioning hostname "kubernetes-upgrade-216520"
	I1027 22:51:43.699813  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:43.700591  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:43.700637  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:43.700980  387237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:51:43.701286  387237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I1027 22:51:43.701325  387237 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-216520 && echo "kubernetes-upgrade-216520" | sudo tee /etc/hostname
	I1027 22:51:43.846388  387237 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-216520
	
	I1027 22:51:43.849282  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:43.849678  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:43.849706  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:43.849961  387237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:51:43.850245  387237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I1027 22:51:43.850276  387237 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-216520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-216520/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-216520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:51:43.968174  387237 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:51:43.968213  387237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21790-352679/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-352679/.minikube}
	I1027 22:51:43.968272  387237 buildroot.go:174] setting up certificates
	I1027 22:51:43.968294  387237 provision.go:84] configureAuth start
	I1027 22:51:43.971752  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:43.972372  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:43.972412  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:43.975771  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:43.976264  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:43.976295  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:43.976492  387237 provision.go:143] copyHostCerts
	I1027 22:51:43.976563  387237 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem, removing ...
	I1027 22:51:43.976612  387237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem
	I1027 22:51:43.976713  387237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem (1082 bytes)
	I1027 22:51:43.976909  387237 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem, removing ...
	I1027 22:51:43.976932  387237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem
	I1027 22:51:43.976996  387237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem (1123 bytes)
	I1027 22:51:43.977097  387237 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem, removing ...
	I1027 22:51:43.977108  387237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem
	I1027 22:51:43.977146  387237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem (1675 bytes)
	I1027 22:51:43.977226  387237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-216520 san=[127.0.0.1 192.168.61.85 kubernetes-upgrade-216520 localhost minikube]
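The configureAuth step above generates a server certificate with the SANs listed in the log line. A minimal sketch for confirming those SANs on the generated file, using a generic openssl invocation (not minikube's own tooling) and the ServerCertPath reported earlier:

    # Print the Subject Alternative Names embedded in the generated server cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'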
	I1027 22:51:44.225408  387237 provision.go:177] copyRemoteCerts
	I1027 22:51:44.225484  387237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:51:44.228942  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:44.229622  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:44.229664  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:44.229853  387237 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/kubernetes-upgrade-216520/id_rsa Username:docker}
	I1027 22:51:44.326551  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 22:51:44.372065  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1027 22:51:44.415014  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:51:44.460499  387237 provision.go:87] duration metric: took 492.181949ms to configureAuth
	I1027 22:51:44.460534  387237 buildroot.go:189] setting minikube options for container-runtime
	I1027 22:51:44.460803  387237 config.go:182] Loaded profile config "kubernetes-upgrade-216520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:51:44.463695  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:44.464219  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:44.464247  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:44.464423  387237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:51:44.464657  387237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I1027 22:51:44.464674  387237 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:51:45.300873  387237 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:51:45.300927  387237 machine.go:97] duration metric: took 1.734423993s to provisionDockerMachine
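The provisioning step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts crio. A sketch for checking, from inside the guest, that the service actually picks the file up; this assumes (it is not shown in this log) that the crio.service unit sources it via an EnvironmentFile= directive:

    # Show where the crio unit pulls environment from, then the generated options file.
    systemctl cat crio | grep -i environmentfile
    cat /etc/sysconfig/crio.minikube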
	I1027 22:51:45.300942  387237 start.go:293] postStartSetup for "kubernetes-upgrade-216520" (driver="kvm2")
	I1027 22:51:45.300955  387237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:51:45.301035  387237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:51:45.304663  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:45.305201  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:45.305231  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:45.305483  387237 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/kubernetes-upgrade-216520/id_rsa Username:docker}
	I1027 22:51:45.399255  387237 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:51:45.405973  387237 info.go:137] Remote host: Buildroot 2025.02
	I1027 22:51:45.406008  387237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/addons for local assets ...
	I1027 22:51:45.406099  387237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/files for local assets ...
	I1027 22:51:45.406201  387237 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem -> 3566212.pem in /etc/ssl/certs
	I1027 22:51:45.406324  387237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:51:45.424697  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /etc/ssl/certs/3566212.pem (1708 bytes)
	I1027 22:51:45.475983  387237 start.go:296] duration metric: took 175.022901ms for postStartSetup
	I1027 22:51:45.476043  387237 fix.go:57] duration metric: took 1.913325451s for fixHost
	I1027 22:51:45.480280  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:45.480972  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:45.481016  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:45.481401  387237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:51:45.481735  387237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I1027 22:51:45.481760  387237 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1027 22:51:45.631029  387237 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761605505.620320999
	
	I1027 22:51:45.631062  387237 fix.go:217] guest clock: 1761605505.620320999
	I1027 22:51:45.631075  387237 fix.go:230] Guest: 2025-10-27 22:51:45.620320999 +0000 UTC Remote: 2025-10-27 22:51:45.476049856 +0000 UTC m=+2.033943657 (delta=144.271143ms)
	I1027 22:51:45.631100  387237 fix.go:201] guest clock delta is within tolerance: 144.271143ms
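The tolerance check above is plain subtraction of the two timestamps printed in the log:

    # Guest:  22:51:45.620320999
    # Remote: 22:51:45.476049856
    # delta = 45.620320999 s - 45.476049856 s = 0.144271143 s ≈ 144.27 ms  (within tolerance)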
	I1027 22:51:45.631109  387237 start.go:83] releasing machines lock for "kubernetes-upgrade-216520", held for 2.068404524s
	I1027 22:51:45.634941  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:45.635470  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:45.635510  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:45.637137  387237 ssh_runner.go:195] Run: cat /version.json
	I1027 22:51:45.637244  387237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:51:45.641683  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:45.641787  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:45.642332  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:45.642387  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:45.642487  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:51:45.642527  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:51:45.642724  387237 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/kubernetes-upgrade-216520/id_rsa Username:docker}
	I1027 22:51:45.642933  387237 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/kubernetes-upgrade-216520/id_rsa Username:docker}
	I1027 22:51:45.848636  387237 ssh_runner.go:195] Run: systemctl --version
	I1027 22:51:45.867421  387237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:51:46.113664  387237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:51:46.138375  387237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:51:46.138467  387237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:51:46.176634  387237 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 22:51:46.176895  387237 start.go:496] detecting cgroup driver to use...
	I1027 22:51:46.176995  387237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:51:46.240900  387237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:51:46.315899  387237 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:51:46.315976  387237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:51:46.372542  387237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:51:46.421145  387237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:51:46.867779  387237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:51:47.157551  387237 docker.go:234] disabling docker service ...
	I1027 22:51:47.157643  387237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:51:47.194618  387237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:51:47.229556  387237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:51:47.504929  387237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:51:47.748600  387237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:51:47.768070  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:51:47.816807  387237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:51:47.816917  387237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:51:47.838640  387237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 22:51:47.838734  387237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:51:47.859940  387237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:51:47.885572  387237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:51:47.905164  387237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:51:47.928603  387237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:51:47.948490  387237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:51:47.977507  387237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
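The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A sketch for checking the result from inside the guest, with the values the edits are expected to leave behind shown as comments (the full file layout is not reproduced in this log):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, given the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",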
	I1027 22:51:47.997612  387237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:51:48.015486  387237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:51:48.038848  387237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:51:48.283865  387237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:53:18.696991  387237 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.413060864s)
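The crio restart above took over 90 seconds to return. A sketch of how the unit's startup could be inspected on the guest afterwards, using standard systemd tooling only (nothing minikube-specific):

    # When did the last crio start happen, and what did it log while coming up?
    systemctl show crio -p ExecMainStartTimestamp -p ActiveEnterTimestamp
    journalctl -u crio -b --no-pager | tail -n 50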
	I1027 22:53:18.697036  387237 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:53:18.697101  387237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:53:18.704722  387237 start.go:564] Will wait 60s for crictl version
	I1027 22:53:18.704807  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:53:18.710290  387237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 22:53:18.757637  387237 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 22:53:18.757757  387237 ssh_runner.go:195] Run: crio --version
	I1027 22:53:18.791167  387237 ssh_runner.go:195] Run: crio --version
	I1027 22:53:18.829428  387237 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1027 22:53:18.834458  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:53:18.834990  387237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:3a:1d", ip: ""} in network mk-kubernetes-upgrade-216520: {Iface:virbr3 ExpiryTime:2025-10-27 23:51:20 +0000 UTC Type:0 Mac:52:54:00:f2:3a:1d Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:kubernetes-upgrade-216520 Clientid:01:52:54:00:f2:3a:1d}
	I1027 22:53:18.835017  387237 main.go:143] libmachine: domain kubernetes-upgrade-216520 has defined IP address 192.168.61.85 and MAC address 52:54:00:f2:3a:1d in network mk-kubernetes-upgrade-216520
	I1027 22:53:18.835305  387237 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1027 22:53:18.840923  387237 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-216520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.34.1 ClusterName:kubernetes-upgrade-216520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.85 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:53:18.841087  387237 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:53:18.841166  387237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:53:18.905100  387237 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:53:18.905126  387237 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:53:18.905177  387237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:53:18.947223  387237 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:53:18.947261  387237 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:53:18.947275  387237 kubeadm.go:935] updating node { 192.168.61.85 8443 v1.34.1 crio true true} ...
	I1027 22:53:18.947451  387237 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-216520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-216520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:53:18.947560  387237 ssh_runner.go:195] Run: crio config
	I1027 22:53:18.997491  387237 cni.go:84] Creating CNI manager for ""
	I1027 22:53:18.997529  387237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 22:53:18.997562  387237 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:53:18.997594  387237 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.85 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-216520 NodeName:kubernetes-upgrade-216520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:53:18.997804  387237 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-216520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.85"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.85"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:53:18.997942  387237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:53:19.015407  387237 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:53:19.015505  387237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:53:19.029187  387237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1027 22:53:19.056175  387237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:53:19.079929  387237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
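The 2225-byte file copied above is the kubeadm configuration printed earlier, written to /var/tmp/minikube/kubeadm.yaml.new. A sketch for sanity-checking such a file on the guest; this assumes the kubeadm binary sits next to kubelet under /var/lib/minikube/binaries/v1.34.1 (the log only lists the directory), and the "kubeadm config validate" subcommand is only present in recent kubeadm releases:

    # Validate the generated kubeadm config without applying it.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new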
	I1027 22:53:19.103593  387237 ssh_runner.go:195] Run: grep 192.168.61.85	control-plane.minikube.internal$ /etc/hosts
	I1027 22:53:19.108657  387237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:53:19.273934  387237 ssh_runner.go:195] Run: sudo systemctl start kubelet
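Kubelet is started above but its health is not checked at this point in the log. A sketch for a quick follow-up on the guest with standard systemd commands:

    # Confirm kubelet is active and look at its most recent log lines.
    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 30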
	I1027 22:53:19.292559  387237 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520 for IP: 192.168.61.85
	I1027 22:53:19.292584  387237 certs.go:195] generating shared ca certs ...
	I1027 22:53:19.292600  387237 certs.go:227] acquiring lock for ca certs: {Name:mk64cd4e3986ee7cc99471fffd6acf1db89a5b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:53:19.292768  387237 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key
	I1027 22:53:19.292827  387237 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key
	I1027 22:53:19.292845  387237 certs.go:257] generating profile certs ...
	I1027 22:53:19.292979  387237 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/client.key
	I1027 22:53:19.293032  387237 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/apiserver.key.16dd1fbb
	I1027 22:53:19.293071  387237 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/proxy-client.key
	I1027 22:53:19.293190  387237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem (1338 bytes)
	W1027 22:53:19.293238  387237 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621_empty.pem, impossibly tiny 0 bytes
	I1027 22:53:19.293246  387237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:53:19.293278  387237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem (1082 bytes)
	I1027 22:53:19.293300  387237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:53:19.293335  387237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem (1675 bytes)
	I1027 22:53:19.293373  387237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem (1708 bytes)
	I1027 22:53:19.296073  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:53:19.334140  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 22:53:19.371817  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:53:19.411664  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 22:53:19.454375  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1027 22:53:19.491057  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:53:19.524875  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:53:19.562795  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:53:19.604026  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem --> /usr/share/ca-certificates/356621.pem (1338 bytes)
	I1027 22:53:19.639859  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /usr/share/ca-certificates/3566212.pem (1708 bytes)
	I1027 22:53:19.678688  387237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:53:19.715700  387237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:53:19.747327  387237 ssh_runner.go:195] Run: openssl version
	I1027 22:53:19.758797  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3566212.pem && ln -fs /usr/share/ca-certificates/3566212.pem /etc/ssl/certs/3566212.pem"
	I1027 22:53:19.779793  387237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3566212.pem
	I1027 22:53:19.787534  387237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 21:58 /usr/share/ca-certificates/3566212.pem
	I1027 22:53:19.787625  387237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3566212.pem
	I1027 22:53:19.797516  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3566212.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:53:19.816045  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:53:19.833526  387237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:53:19.840405  387237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:53:19.840493  387237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:53:19.851283  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:53:19.865100  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356621.pem && ln -fs /usr/share/ca-certificates/356621.pem /etc/ssl/certs/356621.pem"
	I1027 22:53:19.884374  387237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356621.pem
	I1027 22:53:19.891624  387237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 21:58 /usr/share/ca-certificates/356621.pem
	I1027 22:53:19.891696  387237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356621.pem
	I1027 22:53:19.902917  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356621.pem /etc/ssl/certs/51391683.0"
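The three certificate blocks above all follow the same pattern: link the PEM into /etc/ssl/certs, compute its OpenSSL subject hash, and symlink the <hash>.0 name so the system trust lookup can resolve it. A condensed sketch of that pattern for one of the files named in the log:

    # OpenSSL resolves CAs via <subject-hash>.0 symlinks in /etc/ssl/certs.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/356621.pem)
    sudo ln -fs /etc/ssl/certs/356621.pem /etc/ssl/certs/"${hash}".0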
	I1027 22:53:19.922136  387237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:53:19.928654  387237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:53:19.937739  387237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:53:19.949151  387237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:53:19.960725  387237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:53:19.969292  387237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:53:19.978263  387237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
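The -checkend 86400 invocations above ask openssl whether each certificate will still be valid 86400 seconds (24 hours) from now; the command exits 0 if so and non-zero if the certificate would expire within that window. A minimal sketch for one of the certificates listed:

    # Exit 0: still valid 24h from now; non-zero: expiring within the window.
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "ok" || echo "expiring within 24h"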
	I1027 22:53:19.986633  387237 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-216520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.34.1 ClusterName:kubernetes-upgrade-216520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.85 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:53:19.986764  387237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:53:19.986975  387237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:53:20.040007  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:53:20.040036  387237 cri.go:89] found id: "784835d5e8f1719a844871310174e69317c403d9539237afbdf221f8b0e2e07a"
	I1027 22:53:20.040040  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:53:20.040044  387237 cri.go:89] found id: "fe30a5f8ef8bc543fee0cf1a2201297114ce34776f5934b46bff7a10e19e8e95"
	I1027 22:53:20.040047  387237 cri.go:89] found id: "7405fe419262bdb45c7d34dd39e8852eb12466dede68fa9fde73aa2efb4065de"
	I1027 22:53:20.040050  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:53:20.040053  387237 cri.go:89] found id: ""
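The IDs above come from crictl ps -a filtered by the kube-system namespace label. A sketch for drilling into one of them on the guest; the ID is the first one reported in the log and "crictl inspect" is standard crictl, not a minikube command:

    # Dump the full CRI view of one of the listed containers.
    sudo crictl inspect 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3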
	I1027 22:53:20.040122  387237 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:53:20.081744  387237 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969/userdata","rootfs":"/var/lib/containers/storage/overlay/a791e5474d5bb2d8418a08c3cae6ba1a73572a8a0bdf3cec4d299afe5249815b/merged","created":"2025-10-27T22:51:45.894776663Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"c4949aab1d2885e95c9ca3a2ce576786\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.seen\":\"2025-10-27T22:51:35.315626151Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podc4949aab1d2885e95c9ca3a2ce576786","io.kubernetes.cri-o.ContainerID":"102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969","io.kubernetes.cri-o.ContainerName":"
k8s_POD_kube-scheduler-kubernetes-upgrade-216520_kube-system_c4949aab1d2885e95c9ca3a2ce576786_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-10-27T22:51:45.737577979Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-216520","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-216520","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c4949aab1d2885e95c9ca3a2ce576786\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"kube-scheduler\",\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-216520\"}","io.kubernetes.cri-o.LogPath":"/var/lo
g/pods/kube-system_kube-scheduler-kubernetes-upgrade-216520_c4949aab1d2885e95c9ca3a2ce576786/102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-216520\",\"uid\":\"c4949aab1d2885e95c9ca3a2ce576786\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a791e5474d5bb2d8418a08c3cae6ba1a73572a8a0bdf3cec4d299afe5249815b/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-216520_kube-system_c4949aab1d2885e95c9ca3a2ce576786_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":10259,\"ContainerPort\":10259,\"Protocol\":\"TCP\",\"
HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-216520_kube-system_c4949aab1d2885e95c9ca3a2ce576786_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c4949aab1d2885e95c9ca3a2ce576786","kubernetes.io/config.hash":"c4949aab1d2885e95c9ca3a2ce576786","kubernetes.io/config.seen":"2025-10-27T22:51:35.315626151Z","kubernetes.io/conf
ig.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e/userdata","rootfs":"/var/lib/containers/storage/overlay/7cddcf1ebda15221485f4881232ab42d7936a88cb75ae7bec4a722d42432a636/merged","created":"2025-10-27T22:51:45.805607132Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"fb366827ab7b32d13cb327d8b8d99103\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.seen\":\"2025-10-27T22:51:35.315624784Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podfb366827ab7b32d13cb327d8b8d99103","io.kubernetes.cri-o.ContainerID":"270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e","io.kubernetes.cri-o.Containe
rName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-216520_kube-system_fb366827ab7b32d13cb327d8b8d99103_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-10-27T22:51:45.678044978Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-216520","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-216520","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb366827ab7b32d13cb327d8b8d99103\",\"io.kubernetes.container.name\":\"PO
D\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-216520_fb366827ab7b32d13cb327d8b8d99103/270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-216520\",\"uid\":\"fb366827ab7b32d13cb327d8b8d99103\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7cddcf1ebda15221485f4881232ab42d7936a88cb75ae7bec4a722d42432a636/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-216520_kube-system_fb366827ab7b32d13cb327d8b8d99103_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings
":"[{\"HostPort\":10257,\"ContainerPort\":10257,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-216520_kube-system_fb366827ab7b32d13cb327d8b8d99103_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"fb366827ab7b32d13cb327d8b8d99103","kubernetes.io/config.hash":"fb366827ab7b32d13cb327d8b
8d99103","kubernetes.io/config.seen":"2025-10-27T22:51:35.315624784Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236/userdata","rootfs":"/var/lib/containers/storage/overlay/cb8180d5e345100a2abca3ca67ab6c7d1a00e1189908aca4ea772d0c7ffcbede/merged","created":"2025-10-27T22:51:36.050084475Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-10-27T22:51:35.315624784Z\",\"kubernetes.io/config.hash\":\"fb366827ab7b32d13cb327d8b8d99103\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podfb366827ab7b32d13cb327d8b8d99103","io.kubernetes.cri-o.ContainerID":"3741bb
d3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-216520_kube-system_fb366827ab7b32d13cb327d8b8d99103_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-10-27T22:51:35.931592038Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-216520","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-216520","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb366827ab7b32d13cb327d8b8d99103\",\"io.kubernetes.c
ontainer.name\":\"POD\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-216520_fb366827ab7b32d13cb327d8b8d99103/3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-216520\",\"uid\":\"fb366827ab7b32d13cb327d8b8d99103\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cb8180d5e345100a2abca3ca67ab6c7d1a00e1189908aca4ea772d0c7ffcbede/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-216520_kube-system_fb366827ab7b32d13cb327d8b8d99103_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\
"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":10257,\"ContainerPort\":10257,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-216520_kube-system_fb366827ab7b32d13cb327d8b8d99103_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"fb366827ab7b3
2d13cb327d8b8d99103","kubernetes.io/config.hash":"fb366827ab7b32d13cb327d8b8d99103","kubernetes.io/config.seen":"2025-10-27T22:51:35.315624784Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"568df829efb9f986af0f8df8931f3a60217090d4a29ef61138e3db1b79b891b3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/568df829efb9f986af0f8df8931f3a60217090d4a29ef61138e3db1b79b891b3/userdata","rootfs":"/var/lib/containers/storage/overlay/dc3eea06cf5938bf612f85bf3ec64772b936b757e3560ae848705481ef648a44/merged","created":"2025-10-27T22:51:45.854273874Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.seen\":\"2025-10-27T22:51:35.322253451Z\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.61.85:2379\",\"kubernetes.io/config.hash\":\"0b2f7b30e945705567d89722fabeeb5
8\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod0b2f7b30e945705567d89722fabeeb58","io.kubernetes.cri-o.ContainerID":"568df829efb9f986af0f8df8931f3a60217090d4a29ef61138e3db1b79b891b3","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-10-27T22:51:45.686284098Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-216520","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/568df829efb9f986af0f8df8931f3a60217090d4a29ef61138e3db1b79b891b3/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-216520","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"0b2f7b30e945705567d89722fabeeb58\",\"io.kubernetes.container.name\":\"POD\",\"c
omponent\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-216520_0b2f7b30e945705567d89722fabeeb58/568df829efb9f986af0f8df8931f3a60217090d4a29ef61138e3db1b79b891b3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-216520\",\"uid\":\"0b2f7b30e945705567d89722fabeeb58\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dc3eea06cf5938bf612f85bf3ec64772b936b757e3560ae848705481ef648a44/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000
,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":2381,\"ContainerPort\":2381,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/568df829efb9f986af0f8df8931f3a60217090d4a29ef61138e3db1b79b891b3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"568df829efb9f986af0f8df8931f3a60217090d4a29ef61138e3db1b79b891b3","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/568df829efb9f986af0f8df8931f3a60217090d4a29ef61138e3db1b79b891b3/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0b2f7b30e945705567d89722fabeeb58
","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.61.85:2379","kubernetes.io/config.hash":"0b2f7b30e945705567d89722fabeeb58","kubernetes.io/config.seen":"2025-10-27T22:51:35.322253451Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3/userdata","rootfs":"/var/lib/containers/storage/overlay/1eb817330314f9d3618ec9c61fd2ad389b075fb277b49cd1f917a83a047dacde/merged","created":"2025-10-27T22:51:46.415728652Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"af42bbeb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.
terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"af42bbeb\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-27T22:51:46.211589412Z","io.kubernetes.cri-o.Image":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri-o.ImageRef":"7dd6aaa1717ab7eaae4578503e4c4d99
65fcf5a249e8155fe16379ee9b6cb813","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c4949aab1d2885e95c9ca3a2ce576786\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-216520_c4949aab1d2885e95c9ca3a2ce576786/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1eb817330314f9d3618ec9c61fd2ad389b075fb277b49cd1f917a83a047dacde/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-216520_kube-system_c4949aab1d2885e95c9ca3a2ce576786_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969/userdata/resolv.conf",
"io.kubernetes.cri-o.SandboxID":"102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-216520_kube-system_c4949aab1d2885e95c9ca3a2ce576786_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c4949aab1d2885e95c9ca3a2ce576786/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c4949aab1d2885e95c9ca3a2ce576786/containers/kube-scheduler/24a78af3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kub
e-scheduler-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c4949aab1d2885e95c9ca3a2ce576786","kubernetes.io/config.hash":"c4949aab1d2885e95c9ca3a2ce576786","kubernetes.io/config.seen":"2025-10-27T22:51:35.315626151Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"7405fe419262bdb45c7d34dd39e8852eb12466dede68fa9fde73aa2efb4065de","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/7405fe419262bdb45c7d34dd39e8852eb12466dede68fa9fde73aa2efb4065de/userdata","rootfs":"/var/lib/containers/storage/overlay/5d5f85a42466bc037c72c442218e31222b62f68823bbd32e9a561e9539492cf4/merged","created":"2025-10-27T22:51:36.329444878Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9c112505","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":1025
7,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9c112505\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7405fe419262bdb45c7d34dd39e8852eb12466dede68fa9fde73aa2efb4065de","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-27T22:51:36.197614755Z","io.kubernetes.cri-o.Image":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","io.kubernetes.cri-o.ImageName":"registry.k
8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri-o.ImageRef":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb366827ab7b32d13cb327d8b8d99103\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-216520_fb366827ab7b32d13cb327d8b8d99103/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5d5f85a42466bc037c72c442218e31222b62f68823bbd32e9a561e9539492cf4/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-216520_kube-system_fb366827ab7b32d13cb327d8b8d99103_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.c
ri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-216520_kube-system_fb366827ab7b32d13cb327d8b8d99103_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fb366827ab7b32d13cb327d8b8d99103/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fb366827ab7b32d13cb327d8b8d99103/containers/kube-controller-manager/04f837e2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/s
sl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fb366827ab7b32
d13cb327d8b8d99103","kubernetes.io/config.hash":"fb366827ab7b32d13cb327d8b8d99103","kubernetes.io/config.seen":"2025-10-27T22:51:35.315624784Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab/userdata","rootfs":"/var/lib/containers/storage/overlay/b01086d787a2cb709fbc42e8655594a861e7bdb796a37906e4d3450e469bb8ee/merged","created":"2025-10-27T22:51:36.079266511Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.seen\":\"2025-10-27T22:51:35.315626151Z\",\"kubernetes.io/config.hash\":\"c4949aab1d2885e95c9ca3a2ce576786\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podc4949aab1d2885e95c9ca3a2ce576786"
,"io.kubernetes.cri-o.ContainerID":"77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-kubernetes-upgrade-216520_kube-system_c4949aab1d2885e95c9ca3a2ce576786_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-10-27T22:51:35.943749012Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-216520","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-216520","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"c4949aab1d2885e95c9ca3a2ce576786\",\"component\":\"kube-scheduler\",\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade
-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-216520_c4949aab1d2885e95c9ca3a2ce576786/77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-216520\",\"uid\":\"c4949aab1d2885e95c9ca3a2ce576786\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b01086d787a2cb709fbc42e8655594a861e7bdb796a37906e4d3450e469bb8ee/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-216520_kube-system_c4949aab1d2885e95c9ca3a2ce576786_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.
oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":10259,\"ContainerPort\":10259,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-216520_kube-system_c4949aab1d2885e95c9ca3a2ce576786_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c4949aab1d2885e95c9ca3a2ce576786","kubernetes.io/con
fig.hash":"c4949aab1d2885e95c9ca3a2ce576786","kubernetes.io/config.seen":"2025-10-27T22:51:35.315626151Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"784835d5e8f1719a844871310174e69317c403d9539237afbdf221f8b0e2e07a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/784835d5e8f1719a844871310174e69317c403d9539237afbdf221f8b0e2e07a/userdata","rootfs":"/var/lib/containers/storage/overlay/86908d472e2be49197b847a6a2b990d1824d54ff7ede17c244d5c47e0666afbf/merged","created":"2025-10-27T22:51:46.1142026Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9c112505","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePol
icy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9c112505\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"784835d5e8f1719a844871310174e69317c403d9539237afbdf221f8b0e2e07a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-27T22:51:46.006530749Z","io.kubernetes.cri-o.Image":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri-o.ImageRef":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.con
tainer.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fb366827ab7b32d13cb327d8b8d99103\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-216520_fb366827ab7b32d13cb327d8b8d99103/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/86908d472e2be49197b847a6a2b990d1824d54ff7ede17c244d5c47e0666afbf/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-216520_kube-system_fb366827ab7b32d13cb327d8b8d99103_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e/userdata/resolv.conf","io.kubernetes.cri-o.
SandboxID":"270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-216520_kube-system_fb366827ab7b32d13cb327d8b8d99103_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fb366827ab7b32d13cb327d8b8d99103/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fb366827ab7b32d13cb327d8b8d99103/containers/kube-controller-manager/fa0b8e42\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"
host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fb366827ab7b32d13cb327d8b8d99103","kubernetes.io/config.hash":"fb366827ab7b32d13cb327d8b8d99103","kubernetes.io/config.seen":"2025-10-27T22:51:35.315624784Z","kubernetes.io/config.source"
:"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53/userdata","rootfs":"/var/lib/containers/storage/overlay/190c447599c80d8bc4aff796ff1a1d49086fb1c6e120cb4189fc6dee43ca79d1/merged","created":"2025-10-27T22:51:36.034945881Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-10-27T22:51:35.315619528Z\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.61.85:8443\",\"kubernetes.io/config.hash\":\"fe1eaa028fcec4b5ffbcd2010eb65da7\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podfe1eaa028fcec4b5ffbcd2010eb65da7","io.kubernetes.cri-o.ContainerID":"7a1287582ea113fc5513ffe867387a5de43820e7d49ca9
41b480c24cdc126a53","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-10-27T22:51:35.914250429Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-216520","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-216520","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fe1eaa028fcec4b5ffbcd2010eb65da7\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"component\":\"
kube-apiserver\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-216520_fe1eaa028fcec4b5ffbcd2010eb65da7/7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-216520\",\"uid\":\"fe1eaa028fcec4b5ffbcd2010eb65da7\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/190c447599c80d8bc4aff796ff1a1d49086fb1c6e120cb4189fc6dee43ca79d1/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":8443,\"Con
tainerPort\":8443,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"fe1eaa028fcec4b5ffbcd2010eb65da7","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.61.85:8443","kubernetes.io/
config.hash":"fe1eaa028fcec4b5ffbcd2010eb65da7","kubernetes.io/config.seen":"2025-10-27T22:51:35.315619528Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"8f7668861e14a84b04450d1eacedee0ae828f3989e051240a9e5a0e09166bb69","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8f7668861e14a84b04450d1eacedee0ae828f3989e051240a9e5a0e09166bb69/userdata","rootfs":"/var/lib/containers/storage/overlay/83a3280b4327002b0821a3c75fa1d314d9be2aff29c83c0066ee957a94be8861/merged","created":"2025-10-27T22:51:45.843219476Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.seen\":\"2025-10-27T22:51:35.315619528Z\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.61.85:8443\",\"kubernetes.io/config.hash\":\"fe1eaa028fcec4b5ffbcd2010eb65da7\"}","io.kubernete
s.cri-o.CgroupParent":"/kubepods/burstable/podfe1eaa028fcec4b5ffbcd2010eb65da7","io.kubernetes.cri-o.ContainerID":"8f7668861e14a84b04450d1eacedee0ae828f3989e051240a9e5a0e09166bb69","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-10-27T22:51:45.685654345Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-216520","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8f7668861e14a84b04450d1eacedee0ae828f3989e051240a9e5a0e09166bb69/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-216520","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"fe1eaa028fcec4b5ffbcd2010eb65da7\",\"tier\":\"control-plane\",\"component\":\"
kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-216520_fe1eaa028fcec4b5ffbcd2010eb65da7/8f7668861e14a84b04450d1eacedee0ae828f3989e051240a9e5a0e09166bb69.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-216520\",\"uid\":\"fe1eaa028fcec4b5ffbcd2010eb65da7\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/83a3280b4327002b0821a3c75fa1d314d9be2aff29c83c0066ee957a94be8861/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernet
es.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":8443,\"ContainerPort\":8443,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8f7668861e14a84b04450d1eacedee0ae828f3989e051240a9e5a0e09166bb69/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8f7668861e14a84b04450d1eacedee0ae828f3989e051240a9e5a0e09166bb69","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8f7668861e14a84b04450d1eacedee0ae828f3989e051240a9e5a0e09166bb69/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":
"kube-system","io.kubernetes.pod.uid":"fe1eaa028fcec4b5ffbcd2010eb65da7","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.61.85:8443","kubernetes.io/config.hash":"fe1eaa028fcec4b5ffbcd2010eb65da7","kubernetes.io/config.seen":"2025-10-27T22:51:35.315619528Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf/userdata","rootfs":"/var/lib/containers/storage/overlay/b45459eb878d1d33c0992f87556fcc098b0ae7777a9ae76d384c60c00e853c61/merged","created":"2025-10-27T22:51:36.093085966Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.seen\":\"2025-10-27T22:51:35.322253451Z\",\"kubead
m.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.61.85:2379\",\"kubernetes.io/config.hash\":\"0b2f7b30e945705567d89722fabeeb58\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod0b2f7b30e945705567d89722fabeeb58","io.kubernetes.cri-o.ContainerID":"d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-10-27T22:51:35.944608635Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-216520","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-216520"
,"io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0b2f7b30e945705567d89722fabeeb58\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-216520_0b2f7b30e945705567d89722fabeeb58/d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-216520\",\"uid\":\"0b2f7b30e945705567d89722fabeeb58\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b45459eb878d1d33c0992f87556fcc098b0ae7777a9ae76d384c60c00e853c61/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1,\"userns_options
\":{\"mode\":2}}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[{\"HostPort\":2381,\"ContainerPort\":2381,\"Protocol\":\"TCP\",\"HostIP\":\"\"}]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf/userdata/shm","io.kubernetes.pod.name":"etcd-kub
ernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0b2f7b30e945705567d89722fabeeb58","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.61.85:2379","kubernetes.io/config.hash":"0b2f7b30e945705567d89722fabeeb58","kubernetes.io/config.seen":"2025-10-27T22:51:35.322253451Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.1","id":"e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b/userdata","rootfs":"/var/lib/containers/storage/overlay/af68647338a1c96e691a209e7b1f1f65d10df3cfa9b9bf5e2fc38dd34d2d6e1a/merged","created":"2025-10-27T22:51:36.286899198Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d0cc63c7","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort
\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d0cc63c7\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-27T22:51:36.172025107Z","io.kubernetes.cri-o.Image":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","io.kubernetes.c
ri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri-o.ImageRef":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fe1eaa028fcec4b5ffbcd2010eb65da7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-216520_fe1eaa028fcec4b5ffbcd2010eb65da7/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/af68647338a1c96e691a209e7b1f1f65d10df3cfa9b9bf5e2fc38dd34d2d6e1a/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage
/overlay-containers/7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fe1eaa028fcec4b5ffbcd2010eb65da7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fe1eaa028fcec4b5ffbcd2010eb65da7/containers/kube-apiserver/ca87c729\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\
"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fe1eaa028fcec4b5ffbcd2010eb65da7","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.61.85:8443","kubernetes.io/config.hash":"fe1eaa028fcec4b5ffbcd2010eb65da7","kubernetes.io/config.seen":"2025-10-27T22:51:35.315619528Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.1","id":"f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f6603df9
bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3/userdata","rootfs":"/var/lib/containers/storage/overlay/9f6355d14c3df2eafe5a633bdc504035ae84b38cd16182ff6c6e84aa320197f9/merged","created":"2025-10-27T22:51:36.39875112Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.
container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-27T22:51:36.274556804Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0b2f7b30e945705567d89722fabeeb58\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-216520_0b2f7b30e945705567d89722fabeeb58/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.M
ountPoint":"/var/lib/containers/storage/overlay/9f6355d14c3df2eafe5a633bdc504035ae84b38cd16182ff6c6e84aa320197f9/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0b2f7b30e945705567d89722fabeeb58/etc-hosts\",\"readonly\":false,\"propagation\":0,\"
selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0b2f7b30e945705567d89722fabeeb58/containers/etcd/122e4126\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0b2f7b30e945705567d89722fabeeb58","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.61.85:2379","kubernetes.io/config.hash":"0b2f7b30e945705567d89722fabeeb58","kubernetes.io/config.seen":"2025-10-27T22:51:35.322253451Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVe
rsion":"1.2.1","id":"fe30a5f8ef8bc543fee0cf1a2201297114ce34776f5934b46bff7a10e19e8e95","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/fe30a5f8ef8bc543fee0cf1a2201297114ce34776f5934b46bff7a10e19e8e95/userdata","rootfs":"/var/lib/containers/storage/overlay/4c0a0a4bac28a74c90abc29bec2dbf9224f02d1ae118d9fe56828a3bc04ac70f/merged","created":"2025-10-27T22:51:36.35650571Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"af42bbeb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"af42bbeb\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"contain
erPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fe30a5f8ef8bc543fee0cf1a2201297114ce34776f5934b46bff7a10e19e8e95","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-27T22:51:36.219925868Z","io.kubernetes.cri-o.Image":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri-o.ImageRef":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-216520\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c4949aab1d2885e95c9ca3a2ce
576786\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-216520_c4949aab1d2885e95c9ca3a2ce576786/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4c0a0a4bac28a74c90abc29bec2dbf9224f02d1ae118d9fe56828a3bc04ac70f/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-216520_kube-system_c4949aab1d2885e95c9ca3a2ce576786_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-216520_kube-system_c4949aab1d2885e95c9ca3a2ce576786_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o
.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c4949aab1d2885e95c9ca3a2ce576786/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c4949aab1d2885e95c9ca3a2ce576786/containers/kube-scheduler/144e2b8b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-216520","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c4949aab1d2885e95c9ca3a2ce576786","kubernetes.io/config.hash":"c4949aab1d2885e95c9ca3a2ce576786","kubernetes.io/config.seen":"2025-10-27T22:51:35.315
626151Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I1027 22:53:20.082437  387237 cri.go:126] list returned 14 containers
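
The JSON array above is the raw OCI container-state list (each entry carrying an id, a status, and a map of io.kubernetes.* annotations) that the cri helper walks before printing "list returned 14 containers". A minimal Go sketch of decoding just those three fields — a hypothetical struct for illustration, not minikube's own types, with a tiny stand-in payload instead of the full dump:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // containerEntry models only the fields the log relies on; the real
    // entries carry many more annotations than are shown here.
    type containerEntry struct {
    	ID          string            `json:"id"`
    	Status      string            `json:"status"`
    	Annotations map[string]string `json:"annotations"`
    }

    func main() {
    	// In the real flow, raw would be the full JSON array dumped above.
    	raw := []byte(`[{"id":"abc123def456","status":"stopped","annotations":{"io.kubernetes.pod.name":"etcd-kubernetes-upgrade-216520"}}]`)

    	var entries []containerEntry
    	if err := json.Unmarshal(raw, &entries); err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		fmt.Printf("%s %s pod=%s\n", e.ID[:12], e.Status, e.Annotations["io.kubernetes.pod.name"])
    	}
    }
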
	I1027 22:53:20.082457  387237 cri.go:129] container: {ID:102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969 Status:stopped}
	I1027 22:53:20.082472  387237 cri.go:131] skipping 102cb7d49ab293314eac4b3b8507dc7e3e6b74ad9a2cb2999400caa5be497969 - not in ps
	I1027 22:53:20.082477  387237 cri.go:129] container: {ID:270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e Status:stopped}
	I1027 22:53:20.082487  387237 cri.go:131] skipping 270ec61362bac468faa55774c31d0bef4874c35479f0d6a9fe3a1983230cdb2e - not in ps
	I1027 22:53:20.082493  387237 cri.go:129] container: {ID:3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236 Status:stopped}
	I1027 22:53:20.082503  387237 cri.go:131] skipping 3741bbd3d0badd3cc3aa6ed10105360b265f388662cf2bb9d72a6e115f2e9236 - not in ps
	I1027 22:53:20.082507  387237 cri.go:129] container: {ID:568df829efb9f986af0f8df8931f3a60217090d4a29ef61138e3db1b79b891b3 Status:stopped}
	I1027 22:53:20.082511  387237 cri.go:131] skipping 568df829efb9f986af0f8df8931f3a60217090d4a29ef61138e3db1b79b891b3 - not in ps
	I1027 22:53:20.082514  387237 cri.go:129] container: {ID:66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3 Status:stopped}
	I1027 22:53:20.082520  387237 cri.go:135] skipping {66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3 stopped}: state = "stopped", want "paused"
	I1027 22:53:20.082530  387237 cri.go:129] container: {ID:7405fe419262bdb45c7d34dd39e8852eb12466dede68fa9fde73aa2efb4065de Status:stopped}
	I1027 22:53:20.082534  387237 cri.go:135] skipping {7405fe419262bdb45c7d34dd39e8852eb12466dede68fa9fde73aa2efb4065de stopped}: state = "stopped", want "paused"
	I1027 22:53:20.082539  387237 cri.go:129] container: {ID:77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab Status:stopped}
	I1027 22:53:20.082544  387237 cri.go:131] skipping 77d17d6ff34cbe3e0679820be3536e526f77076562bc5d7d398035b5b42a69ab - not in ps
	I1027 22:53:20.082550  387237 cri.go:129] container: {ID:784835d5e8f1719a844871310174e69317c403d9539237afbdf221f8b0e2e07a Status:stopped}
	I1027 22:53:20.082554  387237 cri.go:135] skipping {784835d5e8f1719a844871310174e69317c403d9539237afbdf221f8b0e2e07a stopped}: state = "stopped", want "paused"
	I1027 22:53:20.082558  387237 cri.go:129] container: {ID:7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53 Status:stopped}
	I1027 22:53:20.082563  387237 cri.go:131] skipping 7a1287582ea113fc5513ffe867387a5de43820e7d49ca941b480c24cdc126a53 - not in ps
	I1027 22:53:20.082570  387237 cri.go:129] container: {ID:8f7668861e14a84b04450d1eacedee0ae828f3989e051240a9e5a0e09166bb69 Status:stopped}
	I1027 22:53:20.082577  387237 cri.go:131] skipping 8f7668861e14a84b04450d1eacedee0ae828f3989e051240a9e5a0e09166bb69 - not in ps
	I1027 22:53:20.082582  387237 cri.go:129] container: {ID:d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf Status:stopped}
	I1027 22:53:20.082589  387237 cri.go:131] skipping d73c7f81aa8d253a7ed35621b2758ad45219a2d7d719dd868c57521ce7ba7ebf - not in ps
	I1027 22:53:20.082595  387237 cri.go:129] container: {ID:e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b Status:stopped}
	I1027 22:53:20.082607  387237 cri.go:135] skipping {e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b stopped}: state = "stopped", want "paused"
	I1027 22:53:20.082612  387237 cri.go:129] container: {ID:f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3 Status:stopped}
	I1027 22:53:20.082616  387237 cri.go:135] skipping {f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3 stopped}: state = "stopped", want "paused"
	I1027 22:53:20.082622  387237 cri.go:129] container: {ID:fe30a5f8ef8bc543fee0cf1a2201297114ce34776f5934b46bff7a10e19e8e95 Status:stopped}
	I1027 22:53:20.082625  387237 cri.go:135] skipping {fe30a5f8ef8bc543fee0cf1a2201297114ce34776f5934b46bff7a10e19e8e95 stopped}: state = "stopped", want "paused"
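
The skip decisions above follow a simple rule: entries that crictl does not report are dropped as "not in ps", and the rest are dropped whenever their state is "stopped" while the caller wants "paused". A rough, self-contained Go sketch of that filter (hypothetical helper name and IDs, not minikube's cri.go):

    package main

    import "fmt"

    // filterByState mirrors the skip decisions above: containers not known to
    // crictl ("not in ps") are ignored, and the rest are kept only when their
    // state matches what the caller wants (here "paused").
    func filterByState(states map[string]string, inPS map[string]bool, want string) []string {
    	var keep []string
    	for id, state := range states {
    		if !inPS[id] {
    			fmt.Printf("skipping %s - not in ps\n", id)
    			continue
    		}
    		if state != want {
    			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", id, state, state, want)
    			continue
    		}
    		keep = append(keep, id)
    	}
    	return keep
    }

    func main() {
    	// Hypothetical IDs for illustration only.
    	states := map[string]string{"66f432db": "stopped", "0123abcd": "paused"}
    	inPS := map[string]bool{"66f432db": true, "0123abcd": true}
    	fmt.Println(filterByState(states, inPS, "paused"))
    }
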
	I1027 22:53:20.082687  387237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:53:20.097377  387237 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:53:20.097408  387237 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:53:20.097478  387237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:53:20.117013  387237 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:53:20.117773  387237 kubeconfig.go:125] found "kubernetes-upgrade-216520" server: "https://192.168.61.85:8443"
	I1027 22:53:20.118650  387237 kapi.go:59] client config for kubernetes-upgrade-216520: &rest.Config{Host:"https://192.168.61.85:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/client.key", CAFile:"/home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(n
il), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
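
The rest.Config dumped above is the standard client-go configuration object; an equivalent config can be assembled directly from the logged host and certificate paths. A minimal sketch, assuming the standard k8s.io/client-go API (the node listing at the end is only an illustrative probe, not something minikube does at this point):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Same host and TLS material as the rest.Config dumped above.
    	cfg := &rest.Config{
    		Host: "https://192.168.61.85:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kubernetes-upgrade-216520/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// A quick liveness check of the control plane: list the nodes.
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(len(nodes.Items), "node(s)")
    }
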
	I1027 22:53:20.119202  387237 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 22:53:20.119226  387237 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 22:53:20.119231  387237 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 22:53:20.119238  387237 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 22:53:20.119244  387237 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 22:53:20.119676  387237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:53:20.136120  387237 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.61.85
	I1027 22:53:20.136163  387237 kubeadm.go:1161] stopping kube-system containers ...
	I1027 22:53:20.136176  387237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1027 22:53:20.136232  387237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:53:20.183110  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:53:20.183144  387237 cri.go:89] found id: "784835d5e8f1719a844871310174e69317c403d9539237afbdf221f8b0e2e07a"
	I1027 22:53:20.183151  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:53:20.183155  387237 cri.go:89] found id: "fe30a5f8ef8bc543fee0cf1a2201297114ce34776f5934b46bff7a10e19e8e95"
	I1027 22:53:20.183166  387237 cri.go:89] found id: "7405fe419262bdb45c7d34dd39e8852eb12466dede68fa9fde73aa2efb4065de"
	I1027 22:53:20.183171  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:53:20.183176  387237 cri.go:89] found id: ""
	I1027 22:53:20.183185  387237 cri.go:252] Stopping containers: [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3 784835d5e8f1719a844871310174e69317c403d9539237afbdf221f8b0e2e07a f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3 fe30a5f8ef8bc543fee0cf1a2201297114ce34776f5934b46bff7a10e19e8e95 7405fe419262bdb45c7d34dd39e8852eb12466dede68fa9fde73aa2efb4065de e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:53:20.183269  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:53:20.188997  387237 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3 784835d5e8f1719a844871310174e69317c403d9539237afbdf221f8b0e2e07a f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3 fe30a5f8ef8bc543fee0cf1a2201297114ce34776f5934b46bff7a10e19e8e95 7405fe419262bdb45c7d34dd39e8852eb12466dede68fa9fde73aa2efb4065de e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b
	I1027 22:53:20.268832  387237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1027 22:53:20.312344  387237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:53:20.328095  387237 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Oct 27 22:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5637 Oct 27 22:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5725 Oct 27 22:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5589 Oct 27 22:51 /etc/kubernetes/scheduler.conf
	
	I1027 22:53:20.328185  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:53:20.341719  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:53:20.354715  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:53:20.354804  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:53:20.371293  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:53:20.385078  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:53:20.385159  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:53:20.399481  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:53:20.412240  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:53:20.412310  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:53:20.426181  387237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:53:20.440139  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:53:20.501045  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:53:21.992652  387237 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.49152497s)
	I1027 22:53:21.992781  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:53:22.260344  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:53:22.321183  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
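[Editor's note] The restart path above replays individual `kubeadm init` phases (certs → kubeconfig → kubelet-start → control-plane → etcd local) against the regenerated /var/tmp/minikube/kubeadm.yaml instead of re-running a full init. Below is a minimal, self-contained Go sketch of that phase sequence; the `runKubeadmPhases` helper and its error handling are illustrative assumptions — only the binary path, config path, and phase names come from the log itself, and this is not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"os/exec"
)

// runKubeadmPhases replays the kubeadm init phases seen in the log above, in order.
// Illustrative sketch only: the paths mirror the log, but executing this requires
// root inside the minikube guest, so treat it as documentation of the sequence.
func runKubeadmPhases() error {
	const (
		binDir = "/var/lib/minikube/binaries/v1.34.1"
		config = "/var/tmp/minikube/kubeadm.yaml"
	)
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		shellCmd := fmt.Sprintf("env PATH=%s:$PATH kubeadm init phase %s --config %s", binDir, phase, config)
		if out, err := exec.Command("sudo", "/bin/bash", "-c", shellCmd).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm init phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runKubeadmPhases(); err != nil {
		fmt.Println(err)
	}
}
```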
	I1027 22:53:22.394379  387237 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:53:22.394485  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:22.894558  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:23.394979  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:23.894854  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:24.394847  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:24.895044  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:25.395408  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:25.895397  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:26.395146  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:26.895342  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:27.395120  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:27.895469  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:28.394621  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:28.895598  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:29.394614  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:29.895062  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:30.395485  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:30.894670  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:31.395381  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:31.895175  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:32.395183  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:32.894656  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:33.394842  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:33.895141  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:34.395143  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:34.894903  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:35.395438  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:35.894679  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:36.395085  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:36.894778  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:37.394643  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:37.894956  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:38.394779  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:38.895163  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:39.395116  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:39.895308  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:40.395599  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:40.895177  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:41.395094  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:41.894540  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:42.394930  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:42.895542  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:43.395543  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:43.895273  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:44.395002  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:44.894953  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:45.395059  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:45.895160  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:46.395218  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:46.894812  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:47.394634  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:47.895444  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:48.395401  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:48.895156  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:49.394804  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:49.895005  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:50.394583  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:50.894813  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:51.395421  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:51.895434  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:52.395253  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:52.895489  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:53.394561  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:53.895008  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:54.394650  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:54.895372  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:55.394934  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:55.894839  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:56.395396  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:56.895575  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:57.395512  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:57.895591  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:58.395644  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:58.895367  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:59.395464  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:53:59.895572  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:00.395657  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:00.895593  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:01.395024  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:01.895010  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:02.395551  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:02.895192  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:03.395155  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:03.895453  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:04.395341  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:04.894758  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:05.394678  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:05.895393  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:06.394903  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:06.895049  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:07.395489  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:07.894651  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:08.394707  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:08.894991  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:09.394910  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:09.894540  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:10.395176  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:10.894766  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:11.394620  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:11.895298  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:12.395302  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:12.895113  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:13.394718  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:13.894836  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:14.394666  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:14.895428  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:15.395113  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:15.895159  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:16.395133  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:16.895411  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:17.394706  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:17.894938  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:18.395070  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:18.895596  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:19.395528  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:19.895283  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:20.395146  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:20.894834  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:21.394693  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:21.894553  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
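[Editor's note] The long run of pgrep probes above is minikube's wait-for-apiserver loop: the same check fires roughly every 500 ms, and after about a minute without a match the tool pauses to collect the diagnostics that follow, then resumes polling. Here is a minimal, self-contained Go sketch of that poll-until-timeout pattern, assuming a hypothetical `waitForAPIServer` helper and an illustrative 60-second budget; only the pgrep command line is taken from the log.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls roughly every 500ms (mirroring the cadence in the log
// above) until a kube-apiserver process appears or the timeout expires.
// The pgrep invocation is copied from the log; everything else is an assumption.
func waitForAPIServer(ctx context.Context, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for apiserver process to appear")
		case <-ticker.C:
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
		}
	}
}

func main() {
	if err := waitForAPIServer(context.Background(), 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```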
	I1027 22:54:22.395081  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:54:22.395177  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:54:22.444137  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:22.444169  387237 cri.go:89] found id: ""
	I1027 22:54:22.444180  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:54:22.444256  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:22.451095  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:54:22.451180  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:54:22.494940  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:22.494978  387237 cri.go:89] found id: ""
	I1027 22:54:22.494991  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:54:22.495071  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:22.502510  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:54:22.502610  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:54:22.553004  387237 cri.go:89] found id: ""
	I1027 22:54:22.553049  387237 logs.go:282] 0 containers: []
	W1027 22:54:22.553062  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:54:22.553071  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:54:22.553145  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:54:22.606878  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:22.606938  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:22.606945  387237 cri.go:89] found id: ""
	I1027 22:54:22.606955  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:54:22.607042  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:22.612560  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:22.618223  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:54:22.618299  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:54:22.673564  387237 cri.go:89] found id: ""
	I1027 22:54:22.673595  387237 logs.go:282] 0 containers: []
	W1027 22:54:22.673606  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:54:22.673613  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:54:22.673716  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:54:22.724711  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:22.724748  387237 cri.go:89] found id: "23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761"
	I1027 22:54:22.724755  387237 cri.go:89] found id: ""
	I1027 22:54:22.724768  387237 logs.go:282] 2 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73 23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761]
	I1027 22:54:22.724852  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:22.732692  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:22.738025  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:54:22.738113  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:54:22.795025  387237 cri.go:89] found id: ""
	I1027 22:54:22.795065  387237 logs.go:282] 0 containers: []
	W1027 22:54:22.795077  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:54:22.795085  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:54:22.795160  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:54:22.847871  387237 cri.go:89] found id: ""
	I1027 22:54:22.847941  387237 logs.go:282] 0 containers: []
	W1027 22:54:22.847956  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:54:22.847971  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:54:22.847988  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:54:22.867716  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:54:22.867757  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:22.932943  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:54:22.932988  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:54:23.358384  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:54:23.358428  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:54:23.414995  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:54:23.415049  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:54:23.543849  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:54:23.543908  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:54:23.627842  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:54:23.627901  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:54:23.627923  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:23.706041  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:54:23.706084  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:23.780729  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:54:23.780772  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:23.847529  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:54:23.847577  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:23.900401  387237 logs.go:123] Gathering logs for kube-controller-manager [23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761] ...
	I1027 22:54:23.900433  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761"
	I1027 22:54:26.446069  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:26.467718  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:54:26.467803  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:54:26.511698  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:26.511738  387237 cri.go:89] found id: ""
	I1027 22:54:26.511749  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:54:26.511811  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:26.517266  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:54:26.517362  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:54:26.561242  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:26.561280  387237 cri.go:89] found id: ""
	I1027 22:54:26.561293  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:54:26.561365  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:26.571369  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:54:26.571448  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:54:26.623204  387237 cri.go:89] found id: ""
	I1027 22:54:26.623244  387237 logs.go:282] 0 containers: []
	W1027 22:54:26.623257  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:54:26.623266  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:54:26.623348  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:54:26.670422  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:26.670460  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:26.670466  387237 cri.go:89] found id: ""
	I1027 22:54:26.670478  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:54:26.670550  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:26.677680  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:26.682744  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:54:26.682845  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:54:26.728133  387237 cri.go:89] found id: ""
	I1027 22:54:26.728187  387237 logs.go:282] 0 containers: []
	W1027 22:54:26.728199  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:54:26.728227  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:54:26.728317  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:54:26.773384  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:26.773413  387237 cri.go:89] found id: "23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761"
	I1027 22:54:26.773419  387237 cri.go:89] found id: ""
	I1027 22:54:26.773430  387237 logs.go:282] 2 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73 23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761]
	I1027 22:54:26.773495  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:26.780734  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:26.788034  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:54:26.788123  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:54:26.838607  387237 cri.go:89] found id: ""
	I1027 22:54:26.838640  387237 logs.go:282] 0 containers: []
	W1027 22:54:26.838651  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:54:26.838659  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:54:26.838733  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:54:26.897572  387237 cri.go:89] found id: ""
	I1027 22:54:26.897605  387237 logs.go:282] 0 containers: []
	W1027 22:54:26.897616  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:54:26.897628  387237 logs.go:123] Gathering logs for kube-controller-manager [23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761] ...
	I1027 22:54:26.897646  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761"
	I1027 22:54:26.945530  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:54:26.945564  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:54:27.240606  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:54:27.240667  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:54:27.259287  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:54:27.259326  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:27.309246  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:54:27.309288  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:54:27.365569  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:54:27.365610  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:54:27.482252  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:54:27.482309  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:54:27.582427  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:54:27.582462  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:54:27.582488  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:27.662073  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:54:27.662121  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:27.724444  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:54:27.724495  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:27.803542  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:54:27.803588  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:30.360001  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:30.382805  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:54:30.382912  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:54:30.430490  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:30.430525  387237 cri.go:89] found id: ""
	I1027 22:54:30.430537  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:54:30.430663  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:30.436015  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:54:30.436094  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:54:30.487159  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:30.487190  387237 cri.go:89] found id: ""
	I1027 22:54:30.487201  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:54:30.487264  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:30.492315  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:54:30.492384  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:54:30.538858  387237 cri.go:89] found id: ""
	I1027 22:54:30.538908  387237 logs.go:282] 0 containers: []
	W1027 22:54:30.538917  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:54:30.538923  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:54:30.538977  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:54:30.587868  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:30.587916  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:30.587923  387237 cri.go:89] found id: ""
	I1027 22:54:30.587940  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:54:30.588004  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:30.593165  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:30.597976  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:54:30.598045  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:54:30.640363  387237 cri.go:89] found id: ""
	I1027 22:54:30.640395  387237 logs.go:282] 0 containers: []
	W1027 22:54:30.640403  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:54:30.640409  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:54:30.640487  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:54:30.687933  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:30.687964  387237 cri.go:89] found id: "23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761"
	I1027 22:54:30.687971  387237 cri.go:89] found id: ""
	I1027 22:54:30.687982  387237 logs.go:282] 2 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73 23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761]
	I1027 22:54:30.688127  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:30.695198  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:30.700835  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:54:30.700948  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:54:30.756131  387237 cri.go:89] found id: ""
	I1027 22:54:30.756175  387237 logs.go:282] 0 containers: []
	W1027 22:54:30.756188  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:54:30.756197  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:54:30.756275  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:54:30.802447  387237 cri.go:89] found id: ""
	I1027 22:54:30.802481  387237 logs.go:282] 0 containers: []
	W1027 22:54:30.802492  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:54:30.802506  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:54:30.802524  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:30.860209  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:54:30.860252  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:30.954443  387237 logs.go:123] Gathering logs for kube-controller-manager [23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761] ...
	I1027 22:54:30.954492  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761"
	W1027 22:54:31.004923  387237 logs.go:130] failed kube-controller-manager [23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23adc0601eef80188cde0353cb30592e96ac6ee858f106ecb7846468387dc761": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:54:30Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-216520_fb366827ab7b32d13cb327d8b8d99103/kube-controller-manager/3.log\": lstat /var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-216520_fb366827ab7b32d13cb327d8b8d99103/kube-controller-manager/3.log: no such file or directory"
	 output: 
	** stderr ** 
	time="2025-10-27T22:54:30Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-216520_fb366827ab7b32d13cb327d8b8d99103/kube-controller-manager/3.log\": lstat /var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-216520_fb366827ab7b32d13cb327d8b8d99103/kube-controller-manager/3.log: no such file or directory"
	
	** /stderr **
	I1027 22:54:31.004960  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:54:31.004977  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:31.090400  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:54:31.090450  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:31.138567  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:54:31.138601  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:31.185378  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:54:31.185422  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:54:31.479021  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:54:31.479065  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:54:31.531193  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:54:31.531244  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:54:31.645347  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:54:31.645393  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:54:31.662661  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:54:31.662698  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:54:31.746085  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:54:34.246560  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:34.274416  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:54:34.274504  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:54:34.331742  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:34.331773  387237 cri.go:89] found id: ""
	I1027 22:54:34.331784  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:54:34.331854  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:34.339423  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:54:34.339532  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:54:34.398433  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:34.398471  387237 cri.go:89] found id: ""
	I1027 22:54:34.398483  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:54:34.398555  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:34.404262  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:54:34.404370  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:54:34.451196  387237 cri.go:89] found id: ""
	I1027 22:54:34.451231  387237 logs.go:282] 0 containers: []
	W1027 22:54:34.451252  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:54:34.451260  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:54:34.451318  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:54:34.507011  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:34.507046  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:34.507053  387237 cri.go:89] found id: ""
	I1027 22:54:34.507066  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:54:34.507137  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:34.514281  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:34.521354  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:54:34.521441  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:54:34.575541  387237 cri.go:89] found id: ""
	I1027 22:54:34.575582  387237 logs.go:282] 0 containers: []
	W1027 22:54:34.575594  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:54:34.575603  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:54:34.575694  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:54:34.626148  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:34.626188  387237 cri.go:89] found id: ""
	I1027 22:54:34.626200  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:54:34.626275  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:34.631575  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:54:34.631658  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:54:34.676626  387237 cri.go:89] found id: ""
	I1027 22:54:34.676669  387237 logs.go:282] 0 containers: []
	W1027 22:54:34.676680  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:54:34.676692  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:54:34.676748  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:54:34.723792  387237 cri.go:89] found id: ""
	I1027 22:54:34.723823  387237 logs.go:282] 0 containers: []
	W1027 22:54:34.723832  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:54:34.723855  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:54:34.723873  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:54:34.800749  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:54:34.800781  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:54:34.800802  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:54:35.074476  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:54:35.074526  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:54:35.133937  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:54:35.133984  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:54:35.234553  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:54:35.234597  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:54:35.255463  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:54:35.255511  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:35.349249  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:54:35.349307  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:35.420720  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:54:35.420763  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:35.508136  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:54:35.508179  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:35.557150  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:54:35.557192  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:38.104393  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:38.125200  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:54:38.125293  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:54:38.178841  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:38.178882  387237 cri.go:89] found id: ""
	I1027 22:54:38.178914  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:54:38.178988  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:38.185231  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:54:38.185310  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:54:38.246927  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:38.246966  387237 cri.go:89] found id: ""
	I1027 22:54:38.246978  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:54:38.247057  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:38.254368  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:54:38.254450  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:54:38.303064  387237 cri.go:89] found id: ""
	I1027 22:54:38.303107  387237 logs.go:282] 0 containers: []
	W1027 22:54:38.303118  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:54:38.303125  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:54:38.303195  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:54:38.355838  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:38.355870  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:38.355882  387237 cri.go:89] found id: ""
	I1027 22:54:38.355915  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:54:38.356013  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:38.362327  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:38.369472  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:54:38.369565  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:54:38.431780  387237 cri.go:89] found id: ""
	I1027 22:54:38.431812  387237 logs.go:282] 0 containers: []
	W1027 22:54:38.431823  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:54:38.431837  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:54:38.431908  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:54:38.496875  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:38.496930  387237 cri.go:89] found id: ""
	I1027 22:54:38.496943  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:54:38.497018  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:38.504288  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:54:38.504375  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:54:38.564508  387237 cri.go:89] found id: ""
	I1027 22:54:38.564564  387237 logs.go:282] 0 containers: []
	W1027 22:54:38.564580  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:54:38.564588  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:54:38.564688  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:54:38.620460  387237 cri.go:89] found id: ""
	I1027 22:54:38.620495  387237 logs.go:282] 0 containers: []
	W1027 22:54:38.620507  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:54:38.620529  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:54:38.620545  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:54:38.732522  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:54:38.732563  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:54:38.754741  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:54:38.754774  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:54:38.840853  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:54:38.840876  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:54:38.840927  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:38.890909  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:54:38.890953  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:54:39.151023  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:54:39.151069  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:54:39.205956  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:54:39.205996  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:39.292094  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:54:39.292141  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:39.360060  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:54:39.360113  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:39.446854  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:54:39.446910  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
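The cycle above shows the collector listing each control-plane component with crictl and then tailing its logs. To reproduce a single collection step by hand on the node, a minimal sketch (assuming crictl is on PATH; the name filter is one of those shown in the log) is:

    # list all kube-apiserver containers (any state) and tail the newest one's logs
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$ID" ] && sudo crictl logs --tail 400 "$ID"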
	I1027 22:54:41.992024  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:42.020673  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:54:42.020751  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:54:42.079849  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:42.079878  387237 cri.go:89] found id: ""
	I1027 22:54:42.079911  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:54:42.080061  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:42.086514  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:54:42.086610  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:54:42.161697  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:42.161729  387237 cri.go:89] found id: ""
	I1027 22:54:42.161740  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:54:42.161811  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:42.169479  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:54:42.169581  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:54:42.218281  387237 cri.go:89] found id: ""
	I1027 22:54:42.218315  387237 logs.go:282] 0 containers: []
	W1027 22:54:42.218326  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:54:42.218335  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:54:42.218404  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:54:42.271943  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:42.271976  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:42.271984  387237 cri.go:89] found id: ""
	I1027 22:54:42.271996  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:54:42.272079  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:42.278153  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:42.283451  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:54:42.283533  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:54:42.337690  387237 cri.go:89] found id: ""
	I1027 22:54:42.337739  387237 logs.go:282] 0 containers: []
	W1027 22:54:42.337753  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:54:42.337763  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:54:42.337850  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:54:42.404263  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:42.404287  387237 cri.go:89] found id: ""
	I1027 22:54:42.404298  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:54:42.404367  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:42.411089  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:54:42.411176  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:54:42.464773  387237 cri.go:89] found id: ""
	I1027 22:54:42.464804  387237 logs.go:282] 0 containers: []
	W1027 22:54:42.464821  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:54:42.464827  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:54:42.464936  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:54:42.511126  387237 cri.go:89] found id: ""
	I1027 22:54:42.511158  387237 logs.go:282] 0 containers: []
	W1027 22:54:42.511169  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:54:42.511190  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:54:42.511207  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:42.590037  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:54:42.590079  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:42.645071  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:54:42.645109  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:42.707253  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:54:42.707301  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:54:42.773545  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:54:42.773586  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:54:42.892324  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:54:42.892380  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:54:42.917291  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:54:42.917326  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:42.996763  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:54:42.996817  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:43.070399  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:54:43.070444  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:54:43.381736  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:54:43.381794  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:54:43.479141  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
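Every "describe nodes" attempt above fails the same way: nothing is answering on localhost:8443. A quick manual probe, using only the port shown in the error (the /healthz path is an assumption about the apiserver's usual health endpoint, not something taken from this log), would be:

    # is the apiserver container present at all, and does anything answer on 8443?
    sudo crictl ps -a --name=kube-apiserver
    curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver not reachable"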
	I1027 22:54:45.979423  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:46.004291  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:54:46.004386  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:54:46.059105  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:46.059139  387237 cri.go:89] found id: ""
	I1027 22:54:46.059151  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:54:46.059225  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:46.065758  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:54:46.065850  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:54:46.111940  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:46.111991  387237 cri.go:89] found id: ""
	I1027 22:54:46.112005  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:54:46.112086  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:46.119300  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:54:46.119387  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:54:46.177240  387237 cri.go:89] found id: ""
	I1027 22:54:46.177275  387237 logs.go:282] 0 containers: []
	W1027 22:54:46.177320  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:54:46.177332  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:54:46.177408  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:54:46.237042  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:46.237070  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:46.237074  387237 cri.go:89] found id: ""
	I1027 22:54:46.237082  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:54:46.237148  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:46.242973  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:46.248355  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:54:46.248462  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:54:46.298920  387237 cri.go:89] found id: ""
	I1027 22:54:46.298955  387237 logs.go:282] 0 containers: []
	W1027 22:54:46.298965  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:54:46.298974  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:54:46.299049  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:54:46.357027  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:46.357058  387237 cri.go:89] found id: ""
	I1027 22:54:46.357070  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:54:46.357145  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:46.362845  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:54:46.362948  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:54:46.407089  387237 cri.go:89] found id: ""
	I1027 22:54:46.407119  387237 logs.go:282] 0 containers: []
	W1027 22:54:46.407127  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:54:46.407133  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:54:46.407199  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:54:46.451782  387237 cri.go:89] found id: ""
	I1027 22:54:46.451813  387237 logs.go:282] 0 containers: []
	W1027 22:54:46.451821  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:54:46.451838  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:54:46.451852  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:54:46.500090  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:54:46.500144  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:54:46.519068  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:54:46.519106  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:54:46.594058  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:54:46.594085  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:54:46.594103  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:46.657982  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:54:46.658026  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:46.711017  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:54:46.711049  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:54:46.837934  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:54:46.837985  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:46.920315  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:54:46.920358  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:46.991754  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:54:46.991806  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:47.035319  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:54:47.035362  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:54:49.804829  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:49.824506  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:54:49.824585  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:54:49.872929  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:49.872965  387237 cri.go:89] found id: ""
	I1027 22:54:49.872976  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:54:49.873052  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:49.878242  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:54:49.878335  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:54:49.926480  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:49.926507  387237 cri.go:89] found id: ""
	I1027 22:54:49.926517  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:54:49.926582  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:49.932038  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:54:49.932127  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:54:49.976821  387237 cri.go:89] found id: ""
	I1027 22:54:49.976863  387237 logs.go:282] 0 containers: []
	W1027 22:54:49.976874  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:54:49.976883  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:54:49.976968  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:54:50.036063  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:50.036100  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:50.036107  387237 cri.go:89] found id: ""
	I1027 22:54:50.036118  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:54:50.036198  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:50.043537  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:50.051737  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:54:50.051836  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:54:50.106555  387237 cri.go:89] found id: ""
	I1027 22:54:50.106589  387237 logs.go:282] 0 containers: []
	W1027 22:54:50.106602  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:54:50.106612  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:54:50.106689  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:54:50.178148  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:50.178233  387237 cri.go:89] found id: ""
	I1027 22:54:50.178254  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:54:50.178359  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:50.184727  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:54:50.184829  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:54:50.233352  387237 cri.go:89] found id: ""
	I1027 22:54:50.233389  387237 logs.go:282] 0 containers: []
	W1027 22:54:50.233400  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:54:50.233408  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:54:50.233484  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:54:50.280174  387237 cri.go:89] found id: ""
	I1027 22:54:50.280211  387237 logs.go:282] 0 containers: []
	W1027 22:54:50.280222  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:54:50.280243  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:54:50.280261  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:50.353926  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:54:50.353972  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:50.407752  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:54:50.407791  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:54:50.466012  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:54:50.466058  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:54:50.541753  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:54:50.541781  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:54:50.541798  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:50.633005  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:54:50.633056  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:50.689118  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:54:50.689153  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:54:50.970096  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:54:50.970141  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:54:51.081174  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:54:51.081228  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:54:51.103669  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:54:51.103718  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:53.662638  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:53.682413  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:54:53.682487  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:54:53.735126  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:53.735160  387237 cri.go:89] found id: ""
	I1027 22:54:53.735171  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:54:53.735244  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:53.742251  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:54:53.742346  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:54:53.793510  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:53.793542  387237 cri.go:89] found id: ""
	I1027 22:54:53.793554  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:54:53.793623  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:53.798989  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:54:53.799072  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:54:53.850149  387237 cri.go:89] found id: ""
	I1027 22:54:53.850187  387237 logs.go:282] 0 containers: []
	W1027 22:54:53.850198  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:54:53.850205  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:54:53.850278  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:54:53.901112  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:53.901144  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:53.901150  387237 cri.go:89] found id: ""
	I1027 22:54:53.901160  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:54:53.901232  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:53.906254  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:53.911126  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:54:53.911207  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:54:53.962215  387237 cri.go:89] found id: ""
	I1027 22:54:53.962251  387237 logs.go:282] 0 containers: []
	W1027 22:54:53.962262  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:54:53.962270  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:54:53.962341  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:54:54.010752  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:54.010784  387237 cri.go:89] found id: ""
	I1027 22:54:54.010796  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:54:54.010870  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:54.015839  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:54:54.015939  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:54:54.062742  387237 cri.go:89] found id: ""
	I1027 22:54:54.062774  387237 logs.go:282] 0 containers: []
	W1027 22:54:54.062786  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:54:54.062795  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:54:54.062871  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:54:54.116948  387237 cri.go:89] found id: ""
	I1027 22:54:54.116985  387237 logs.go:282] 0 containers: []
	W1027 22:54:54.116995  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:54:54.117018  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:54:54.117034  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:54:54.141457  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:54:54.141507  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:54.221509  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:54:54.221575  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:54.310333  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:54:54.310374  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:54.351750  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:54:54.351795  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:54:54.612291  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:54:54.612338  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:54:54.669927  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:54:54.669962  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:54:54.758833  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:54:54.758863  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:54:54.758881  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:54.850633  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:54:54.850691  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:54.910685  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:54:54.910730  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
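Between gathering passes the collector re-checks for a running apiserver with pgrep (-f matches the full command line, -x requires the pattern to match it entirely, -n picks the newest process). An illustrative stand-alone polling loop in the same spirit (the 3-second interval is an assumption; minikube's own retry logic differs) looks like:

    # wait until a kube-apiserver process started by minikube shows up
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      echo "waiting for kube-apiserver process..."
      sleep 3
    done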
	I1027 22:54:57.566622  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:54:57.596383  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:54:57.596468  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:54:57.647324  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:57.647357  387237 cri.go:89] found id: ""
	I1027 22:54:57.647367  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:54:57.647439  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:57.654716  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:54:57.654820  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:54:57.699912  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:57.699948  387237 cri.go:89] found id: ""
	I1027 22:54:57.699958  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:54:57.700027  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:57.705378  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:54:57.705456  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:54:57.745859  387237 cri.go:89] found id: ""
	I1027 22:54:57.745908  387237 logs.go:282] 0 containers: []
	W1027 22:54:57.745920  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:54:57.745928  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:54:57.746004  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:54:57.786060  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:57.786096  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:57.786102  387237 cri.go:89] found id: ""
	I1027 22:54:57.786115  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:54:57.786189  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:57.791249  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:57.796160  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:54:57.796245  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:54:57.837121  387237 cri.go:89] found id: ""
	I1027 22:54:57.837154  387237 logs.go:282] 0 containers: []
	W1027 22:54:57.837165  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:54:57.837174  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:54:57.837247  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:54:57.893900  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:57.893928  387237 cri.go:89] found id: ""
	I1027 22:54:57.893940  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:54:57.894006  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:54:57.899356  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:54:57.899446  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:54:57.949564  387237 cri.go:89] found id: ""
	I1027 22:54:57.949597  387237 logs.go:282] 0 containers: []
	W1027 22:54:57.949607  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:54:57.949615  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:54:57.949686  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:54:57.997353  387237 cri.go:89] found id: ""
	I1027 22:54:57.997392  387237 logs.go:282] 0 containers: []
	W1027 22:54:57.997400  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:54:57.997415  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:54:57.997426  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:54:58.092497  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:54:58.092544  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:54:58.113028  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:54:58.113068  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:54:58.184426  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:54:58.184449  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:54:58.184461  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:54:58.263228  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:54:58.263264  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:54:58.324619  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:54:58.324659  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:54:58.391898  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:54:58.391943  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:54:58.436801  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:54:58.436840  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:54:58.489013  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:54:58.489067  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:54:58.538306  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:54:58.538343  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:01.306291  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:01.326839  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:01.326953  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:01.381599  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:01.381629  387237 cri.go:89] found id: ""
	I1027 22:55:01.381648  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:01.381712  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:01.387929  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:01.388025  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:01.450416  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:01.450468  387237 cri.go:89] found id: ""
	I1027 22:55:01.450480  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:01.450557  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:01.457633  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:01.457735  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:01.514052  387237 cri.go:89] found id: ""
	I1027 22:55:01.514084  387237 logs.go:282] 0 containers: []
	W1027 22:55:01.514094  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:01.514101  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:01.514169  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:01.567972  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:01.567998  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:01.568005  387237 cri.go:89] found id: ""
	I1027 22:55:01.568015  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:01.568079  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:01.574942  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:01.581811  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:01.581922  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:01.630350  387237 cri.go:89] found id: ""
	I1027 22:55:01.630377  387237 logs.go:282] 0 containers: []
	W1027 22:55:01.630386  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:01.630395  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:01.630461  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:01.676762  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:01.676790  387237 cri.go:89] found id: ""
	I1027 22:55:01.676800  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:55:01.676872  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:01.683589  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:01.683671  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:01.743322  387237 cri.go:89] found id: ""
	I1027 22:55:01.743353  387237 logs.go:282] 0 containers: []
	W1027 22:55:01.743361  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:01.743367  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:01.743434  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:01.802786  387237 cri.go:89] found id: ""
	I1027 22:55:01.802829  387237 logs.go:282] 0 containers: []
	W1027 22:55:01.802838  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:01.802855  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:01.802872  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:02.107481  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:02.107531  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:02.174942  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:02.174993  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:02.307615  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:02.307644  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:02.398442  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:02.398508  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:02.449373  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:02.449417  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:02.469132  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:02.469165  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:02.554477  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:02.554525  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:02.554546  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:02.636397  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:02.636439  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:02.705696  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:55:02.705758  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:05.266945  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:05.296629  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:05.296735  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:05.358967  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:05.358998  387237 cri.go:89] found id: ""
	I1027 22:55:05.359011  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:05.359087  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:05.367206  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:05.367305  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:05.438405  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:05.438518  387237 cri.go:89] found id: ""
	I1027 22:55:05.438534  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:05.438623  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:05.446642  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:05.446752  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:05.514579  387237 cri.go:89] found id: ""
	I1027 22:55:05.514836  387237 logs.go:282] 0 containers: []
	W1027 22:55:05.514860  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:05.514873  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:05.514971  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:05.570420  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:05.570476  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:05.570483  387237 cri.go:89] found id: ""
	I1027 22:55:05.570495  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:05.570573  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:05.578456  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:05.586013  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:05.586097  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:05.646270  387237 cri.go:89] found id: ""
	I1027 22:55:05.646308  387237 logs.go:282] 0 containers: []
	W1027 22:55:05.646319  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:05.646327  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:05.646401  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:05.705878  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:05.705929  387237 cri.go:89] found id: ""
	I1027 22:55:05.705940  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:55:05.706013  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:05.712970  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:05.713046  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:05.768938  387237 cri.go:89] found id: ""
	I1027 22:55:05.768971  387237 logs.go:282] 0 containers: []
	W1027 22:55:05.768982  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:05.768990  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:05.769062  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:05.840678  387237 cri.go:89] found id: ""
	I1027 22:55:05.840713  387237 logs.go:282] 0 containers: []
	W1027 22:55:05.840725  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:05.840748  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:05.840762  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:05.866408  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:05.866459  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:05.959690  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:05.959726  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:05.959742  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:06.058752  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:06.058809  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:06.109138  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:06.109180  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:06.433057  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:06.433108  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:06.553392  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:06.553446  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:06.626537  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:06.626580  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:06.706693  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:55:06.706749  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:06.773515  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:06.773565  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
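The passes above consistently report no containers for coredns, kube-proxy, storage-provisioner, and kindnet, which is consistent with an apiserver that never became reachable to schedule them. That can be cross-checked directly on the node (component names taken from the log; the output is simply empty if the pods were never created):

    for name in coredns kube-proxy storage-provisioner kindnet; do
      echo "== $name =="
      sudo crictl ps -a --name="$name"
    done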
	I1027 22:55:09.340595  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:09.366632  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:09.366704  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:09.415731  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:09.415767  387237 cri.go:89] found id: ""
	I1027 22:55:09.415778  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:09.415864  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:09.421583  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:09.421688  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:09.465499  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:09.465532  387237 cri.go:89] found id: ""
	I1027 22:55:09.465552  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:09.465626  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:09.471422  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:09.471494  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:09.519243  387237 cri.go:89] found id: ""
	I1027 22:55:09.519276  387237 logs.go:282] 0 containers: []
	W1027 22:55:09.519285  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:09.519293  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:09.519354  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:09.564862  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:09.564911  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:09.564917  387237 cri.go:89] found id: ""
	I1027 22:55:09.564933  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:09.565006  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:09.570721  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:09.575966  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:09.576049  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:09.618435  387237 cri.go:89] found id: ""
	I1027 22:55:09.618468  387237 logs.go:282] 0 containers: []
	W1027 22:55:09.618479  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:09.618486  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:09.618560  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:09.663969  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:09.664003  387237 cri.go:89] found id: ""
	I1027 22:55:09.664014  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:55:09.664095  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:09.669317  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:09.669406  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:09.711855  387237 cri.go:89] found id: ""
	I1027 22:55:09.711905  387237 logs.go:282] 0 containers: []
	W1027 22:55:09.711928  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:09.711937  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:09.712011  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:09.759837  387237 cri.go:89] found id: ""
	I1027 22:55:09.759877  387237 logs.go:282] 0 containers: []
	W1027 22:55:09.759907  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:09.759931  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:09.759948  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:09.824054  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:09.824097  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:09.874208  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:09.874240  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:10.167086  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:10.167138  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:10.219156  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:10.219211  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:10.323196  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:10.323245  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:10.414261  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:10.414297  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:55:10.414320  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:10.471352  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:10.471396  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:10.490391  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:10.490427  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:10.578648  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:10.578698  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:13.146366  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:13.168689  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:13.168768  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:13.225548  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:13.225578  387237 cri.go:89] found id: ""
	I1027 22:55:13.225590  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:13.225672  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:13.232612  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:13.232702  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:13.285927  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:13.285960  387237 cri.go:89] found id: ""
	I1027 22:55:13.285972  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:13.286044  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:13.291979  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:13.292077  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:13.346552  387237 cri.go:89] found id: ""
	I1027 22:55:13.346596  387237 logs.go:282] 0 containers: []
	W1027 22:55:13.346605  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:13.346611  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:13.346683  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:13.397089  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:13.397117  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:13.397123  387237 cri.go:89] found id: ""
	I1027 22:55:13.397132  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:13.397204  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:13.403722  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:13.409935  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:13.410033  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:13.465533  387237 cri.go:89] found id: ""
	I1027 22:55:13.465565  387237 logs.go:282] 0 containers: []
	W1027 22:55:13.465575  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:13.465581  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:13.465653  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:13.516006  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:13.516035  387237 cri.go:89] found id: ""
	I1027 22:55:13.516044  387237 logs.go:282] 1 containers: [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:55:13.516113  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:13.523632  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:13.523748  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:13.573177  387237 cri.go:89] found id: ""
	I1027 22:55:13.573218  387237 logs.go:282] 0 containers: []
	W1027 22:55:13.573232  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:13.573242  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:13.573318  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:13.623127  387237 cri.go:89] found id: ""
	I1027 22:55:13.623163  387237 logs.go:282] 0 containers: []
	W1027 22:55:13.623175  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:13.623194  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:13.623212  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:13.706987  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:13.707023  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:13.707041  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:13.794354  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:13.794401  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:13.878985  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:13.879030  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:13.931262  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:13.931309  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:13.955530  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:13.955569  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:14.021608  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:55:14.021656  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:14.067414  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:14.067456  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:14.404218  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:14.404276  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:14.469197  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:14.469242  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:17.130560  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:17.158576  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:17.158679  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:17.221519  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:17.221549  387237 cri.go:89] found id: ""
	I1027 22:55:17.221559  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:17.221626  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:17.228183  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:17.228265  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:17.275280  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:17.275315  387237 cri.go:89] found id: ""
	I1027 22:55:17.275330  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:17.275399  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:17.280901  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:17.280994  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:17.345327  387237 cri.go:89] found id: ""
	I1027 22:55:17.345361  387237 logs.go:282] 0 containers: []
	W1027 22:55:17.345371  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:17.345379  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:17.345450  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:17.412089  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:17.412122  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:17.412128  387237 cri.go:89] found id: ""
	I1027 22:55:17.412139  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:17.412217  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:17.420520  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:17.429466  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:17.429668  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:17.506762  387237 cri.go:89] found id: ""
	I1027 22:55:17.506804  387237 logs.go:282] 0 containers: []
	W1027 22:55:17.506814  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:17.506822  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:17.506908  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:17.572990  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:17.573020  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:17.573026  387237 cri.go:89] found id: ""
	I1027 22:55:17.573037  387237 logs.go:282] 2 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:55:17.573109  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:17.578428  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:17.583961  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:17.584051  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:17.635273  387237 cri.go:89] found id: ""
	I1027 22:55:17.635303  387237 logs.go:282] 0 containers: []
	W1027 22:55:17.635314  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:17.635322  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:17.635398  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:17.684029  387237 cri.go:89] found id: ""
	I1027 22:55:17.684062  387237 logs.go:282] 0 containers: []
	W1027 22:55:17.684073  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:17.684085  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:17.684101  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:17.705212  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:17.705250  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:17.793678  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:17.793706  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:17.793722  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:17.883570  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:17.883621  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:17.981656  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:17.981712  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:18.035289  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:18.035340  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:18.079403  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:55:18.079441  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:18.133804  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:18.133841  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:18.286317  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:18.286376  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:18.358519  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:18.358557  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:18.619097  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:18.619145  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:21.180492  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:21.201158  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:21.201234  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:21.250393  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:21.250431  387237 cri.go:89] found id: ""
	I1027 22:55:21.250442  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:21.250512  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:21.256777  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:21.256875  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:21.300289  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:21.300320  387237 cri.go:89] found id: ""
	I1027 22:55:21.300331  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:21.300402  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:21.306768  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:21.306865  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:21.354992  387237 cri.go:89] found id: ""
	I1027 22:55:21.355023  387237 logs.go:282] 0 containers: []
	W1027 22:55:21.355034  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:21.355041  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:21.355108  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:21.404966  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:21.404998  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:21.405003  387237 cri.go:89] found id: ""
	I1027 22:55:21.405013  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:21.405090  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:21.412840  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:21.419045  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:21.419152  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:21.470330  387237 cri.go:89] found id: ""
	I1027 22:55:21.470373  387237 logs.go:282] 0 containers: []
	W1027 22:55:21.470383  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:21.470391  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:21.470459  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:21.521218  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:21.521251  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:21.521255  387237 cri.go:89] found id: ""
	I1027 22:55:21.521264  387237 logs.go:282] 2 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:55:21.521340  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:21.526914  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:21.532062  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:21.532154  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:21.581682  387237 cri.go:89] found id: ""
	I1027 22:55:21.581714  387237 logs.go:282] 0 containers: []
	W1027 22:55:21.581722  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:21.581728  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:21.581788  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:21.632129  387237 cri.go:89] found id: ""
	I1027 22:55:21.632162  387237 logs.go:282] 0 containers: []
	W1027 22:55:21.632171  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:21.632183  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:21.632196  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:21.754265  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:21.754324  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:21.836405  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:21.836432  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:21.836447  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:21.908648  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:21.908698  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:21.971903  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:21.971955  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:22.060577  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:22.060623  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:22.319473  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:22.319522  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:22.336764  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:22.336815  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:22.388913  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:22.388957  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:22.441717  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:55:22.441754  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:22.491828  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:22.491879  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:25.054769  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:25.079373  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:25.079452  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:25.132944  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:25.132973  387237 cri.go:89] found id: ""
	I1027 22:55:25.132985  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:25.133055  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:25.139022  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:25.139109  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:25.186760  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:25.186801  387237 cri.go:89] found id: ""
	I1027 22:55:25.186813  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:25.186922  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:25.192688  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:25.192777  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:25.234568  387237 cri.go:89] found id: ""
	I1027 22:55:25.234603  387237 logs.go:282] 0 containers: []
	W1027 22:55:25.234615  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:25.234624  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:25.234700  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:25.278958  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:25.278996  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:25.279003  387237 cri.go:89] found id: ""
	I1027 22:55:25.279016  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:25.279095  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:25.284192  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:25.289708  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:25.289800  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:25.331872  387237 cri.go:89] found id: ""
	I1027 22:55:25.331924  387237 logs.go:282] 0 containers: []
	W1027 22:55:25.331935  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:25.331943  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:25.332010  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:25.376548  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:25.376577  387237 cri.go:89] found id: "ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:25.376582  387237 cri.go:89] found id: ""
	I1027 22:55:25.376599  387237 logs.go:282] 2 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73]
	I1027 22:55:25.376670  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:25.381740  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:25.387455  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:25.387539  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:25.432333  387237 cri.go:89] found id: ""
	I1027 22:55:25.432372  387237 logs.go:282] 0 containers: []
	W1027 22:55:25.432384  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:25.432394  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:25.432470  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:25.486951  387237 cri.go:89] found id: ""
	I1027 22:55:25.486983  387237 logs.go:282] 0 containers: []
	W1027 22:55:25.486991  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:25.487002  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:25.487014  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:25.537410  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:25.537444  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:25.647554  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:25.647596  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:25.669722  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:25.669757  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:25.753507  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:25.753560  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:25.810484  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:25.810532  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:25.887255  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:25.887296  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:25.939283  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:25.939329  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:25.987832  387237 logs.go:123] Gathering logs for kube-controller-manager [ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73] ...
	I1027 22:55:25.987868  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec9de37a218886a8274c5c657e4c2f4278a79f62150c9eb8ccf3a49084714c73"
	I1027 22:55:26.030630  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:26.030673  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:26.109678  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:26.109712  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:26.109728  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:28.892307  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:28.919848  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:28.919954  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:28.967787  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:28.967816  387237 cri.go:89] found id: ""
	I1027 22:55:28.967827  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:28.967951  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:28.973807  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:28.973907  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:29.024210  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:29.024245  387237 cri.go:89] found id: ""
	I1027 22:55:29.024257  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:29.024333  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:29.030177  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:29.030266  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:29.076181  387237 cri.go:89] found id: ""
	I1027 22:55:29.076221  387237 logs.go:282] 0 containers: []
	W1027 22:55:29.076237  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:29.076245  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:29.076305  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:29.120820  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:29.120856  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:29.120871  387237 cri.go:89] found id: ""
	I1027 22:55:29.120882  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:29.121010  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:29.126696  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:29.131829  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:29.131990  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:29.183285  387237 cri.go:89] found id: ""
	I1027 22:55:29.183316  387237 logs.go:282] 0 containers: []
	W1027 22:55:29.183325  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:29.183330  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:29.183385  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:29.227671  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:29.227702  387237 cri.go:89] found id: ""
	I1027 22:55:29.227711  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:55:29.227768  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:29.233346  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:29.233425  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:29.280122  387237 cri.go:89] found id: ""
	I1027 22:55:29.280159  387237 logs.go:282] 0 containers: []
	W1027 22:55:29.280168  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:29.280175  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:29.280244  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:29.325133  387237 cri.go:89] found id: ""
	I1027 22:55:29.325170  387237 logs.go:282] 0 containers: []
	W1027 22:55:29.325181  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:29.325198  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:29.325218  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:29.428135  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:29.428177  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:29.489788  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:29.489853  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:29.541112  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:29.541148  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:29.826511  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:29.826577  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:29.846299  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:29.846343  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:29.927836  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:29.927862  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:29.927877  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:30.000249  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:30.000301  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:30.066209  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:30.066258  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:30.110812  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:30.110841  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:32.664581  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:32.685982  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:32.686074  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:32.740683  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:32.740718  387237 cri.go:89] found id: ""
	I1027 22:55:32.740731  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:32.740812  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:32.746596  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:32.746699  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:32.793750  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:32.793776  387237 cri.go:89] found id: ""
	I1027 22:55:32.793787  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:32.793867  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:32.799055  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:32.799143  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:32.854964  387237 cri.go:89] found id: ""
	I1027 22:55:32.854999  387237 logs.go:282] 0 containers: []
	W1027 22:55:32.855011  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:32.855019  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:32.855093  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:32.906248  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:32.906273  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:32.906279  387237 cri.go:89] found id: ""
	I1027 22:55:32.906289  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:32.906363  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:32.911352  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:32.916142  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:32.916289  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:32.958091  387237 cri.go:89] found id: ""
	I1027 22:55:32.958140  387237 logs.go:282] 0 containers: []
	W1027 22:55:32.958153  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:32.958162  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:32.958240  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:33.000602  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:33.000637  387237 cri.go:89] found id: ""
	I1027 22:55:33.000648  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:55:33.000723  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:33.006148  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:33.006238  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:33.052572  387237 cri.go:89] found id: ""
	I1027 22:55:33.052613  387237 logs.go:282] 0 containers: []
	W1027 22:55:33.052625  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:33.052633  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:33.052707  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:33.100197  387237 cri.go:89] found id: ""
	I1027 22:55:33.100240  387237 logs.go:282] 0 containers: []
	W1027 22:55:33.100253  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:33.100277  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:33.100298  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:33.203599  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:33.203624  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:33.203637  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:33.267729  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:33.267767  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:33.555394  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:33.555434  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:33.676955  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:33.676997  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:33.780072  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:33.780118  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:33.853823  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:33.853862  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:33.929862  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:33.929928  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:33.974119  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:33.974150  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:34.027068  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:34.027101  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:36.546957  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:36.571123  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:36.571199  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:36.619164  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:36.619192  387237 cri.go:89] found id: ""
	I1027 22:55:36.619201  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:36.619278  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:36.624747  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:36.624858  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:36.668946  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:36.668978  387237 cri.go:89] found id: ""
	I1027 22:55:36.668988  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:36.669050  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:36.674299  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:36.674399  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:36.715314  387237 cri.go:89] found id: ""
	I1027 22:55:36.715350  387237 logs.go:282] 0 containers: []
	W1027 22:55:36.715360  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:36.715367  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:36.715438  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:36.765056  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:36.765086  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:36.765092  387237 cri.go:89] found id: ""
	I1027 22:55:36.765103  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:36.765161  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:36.770736  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:36.775883  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:36.775984  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:36.826769  387237 cri.go:89] found id: ""
	I1027 22:55:36.826800  387237 logs.go:282] 0 containers: []
	W1027 22:55:36.826812  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:36.826819  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:36.826901  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:36.877370  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:36.877401  387237 cri.go:89] found id: ""
	I1027 22:55:36.877417  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:55:36.877501  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:36.882652  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:36.882730  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:36.934078  387237 cri.go:89] found id: ""
	I1027 22:55:36.934108  387237 logs.go:282] 0 containers: []
	W1027 22:55:36.934116  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:36.934123  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:36.934179  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:36.981007  387237 cri.go:89] found id: ""
	I1027 22:55:36.981051  387237 logs.go:282] 0 containers: []
	W1027 22:55:36.981067  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:36.981090  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:36.981106  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:37.230190  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:37.230248  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:37.340767  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:37.340817  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:37.412626  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:37.412681  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:37.468505  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:37.468552  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:37.511115  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:37.511161  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:37.566693  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:37.566736  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:37.587991  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:37.588030  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:37.662313  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:37.662353  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:37.662374  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:37.726372  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:37.726419  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:40.283019  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:40.302500  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:40.302594  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:40.346299  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:40.346328  387237 cri.go:89] found id: ""
	I1027 22:55:40.346337  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:40.346394  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:40.351262  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:40.351328  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:40.392307  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:40.392338  387237 cri.go:89] found id: ""
	I1027 22:55:40.392349  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:40.392420  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:40.397804  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:40.397880  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:40.441476  387237 cri.go:89] found id: ""
	I1027 22:55:40.441517  387237 logs.go:282] 0 containers: []
	W1027 22:55:40.441528  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:40.441537  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:40.441613  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:40.491334  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:40.491368  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:40.491374  387237 cri.go:89] found id: ""
	I1027 22:55:40.491386  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:40.491468  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:40.496725  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:40.503380  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:40.503462  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:40.551478  387237 cri.go:89] found id: ""
	I1027 22:55:40.551520  387237 logs.go:282] 0 containers: []
	W1027 22:55:40.551532  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:40.551542  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:40.551613  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:40.607978  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:40.608008  387237 cri.go:89] found id: ""
	I1027 22:55:40.608017  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:55:40.608084  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:40.613432  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:40.613523  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:40.670240  387237 cri.go:89] found id: ""
	I1027 22:55:40.670282  387237 logs.go:282] 0 containers: []
	W1027 22:55:40.670295  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:40.670306  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:40.670392  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:40.719747  387237 cri.go:89] found id: ""
	I1027 22:55:40.719789  387237 logs.go:282] 0 containers: []
	W1027 22:55:40.719801  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:40.719832  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:40.719856  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:40.779041  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:40.779095  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:40.827816  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:40.827851  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:41.074305  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:41.074351  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:41.136109  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:41.136153  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:41.239973  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:41.240014  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:41.257403  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:41.257438  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:41.330810  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:41.330846  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:41.330862  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:41.399489  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:41.399535  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:41.442719  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:41.442756  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:44.029437  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:44.049839  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:44.049933  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:44.093526  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:44.093560  387237 cri.go:89] found id: ""
	I1027 22:55:44.093574  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:44.093659  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:44.098654  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:44.098736  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:44.144569  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:44.144606  387237 cri.go:89] found id: ""
	I1027 22:55:44.144617  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:44.144709  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:44.150229  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:44.150329  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:44.195881  387237 cri.go:89] found id: ""
	I1027 22:55:44.195925  387237 logs.go:282] 0 containers: []
	W1027 22:55:44.195937  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:44.195944  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:44.196014  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:44.254745  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:44.254774  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:44.254778  387237 cri.go:89] found id: ""
	I1027 22:55:44.254786  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:44.254859  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:44.260506  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:44.267190  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:44.267278  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:44.307597  387237 cri.go:89] found id: ""
	I1027 22:55:44.307637  387237 logs.go:282] 0 containers: []
	W1027 22:55:44.307649  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:44.307658  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:44.307729  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:44.360743  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:44.360777  387237 cri.go:89] found id: ""
	I1027 22:55:44.360789  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:55:44.360906  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:44.365910  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:44.366001  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:44.412652  387237 cri.go:89] found id: ""
	I1027 22:55:44.412685  387237 logs.go:282] 0 containers: []
	W1027 22:55:44.412694  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:44.412700  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:44.412772  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:44.457424  387237 cri.go:89] found id: ""
	I1027 22:55:44.457453  387237 logs.go:282] 0 containers: []
	W1027 22:55:44.457460  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:44.457476  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:44.457489  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:44.560829  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:44.560877  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:44.578235  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:44.578273  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:44.653089  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:44.653114  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:44.653130  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:44.714933  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:44.714979  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:44.762011  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:44.762045  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:44.829497  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:44.829545  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:44.886139  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:44.886188  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:44.934830  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:44.934862  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:44.981689  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:44.981727  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:47.743089  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:47.764265  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:47.764341  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:47.810295  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:47.810332  387237 cri.go:89] found id: ""
	I1027 22:55:47.810344  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:47.810414  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:47.817322  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:47.817421  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:47.867809  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:47.867853  387237 cri.go:89] found id: ""
	I1027 22:55:47.867865  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:47.867964  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:47.874506  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:47.874594  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:47.923804  387237 cri.go:89] found id: ""
	I1027 22:55:47.923833  387237 logs.go:282] 0 containers: []
	W1027 22:55:47.923841  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:47.923847  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:47.923929  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:47.969580  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:47.969606  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:47.969610  387237 cri.go:89] found id: ""
	I1027 22:55:47.969619  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:47.969690  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:47.975183  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:47.980414  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:47.980495  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:48.032180  387237 cri.go:89] found id: ""
	I1027 22:55:48.032213  387237 logs.go:282] 0 containers: []
	W1027 22:55:48.032222  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:48.032229  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:48.032301  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:48.082443  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:48.082475  387237 cri.go:89] found id: ""
	I1027 22:55:48.082488  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:55:48.082553  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:48.088292  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:48.088366  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:48.145348  387237 cri.go:89] found id: ""
	I1027 22:55:48.145394  387237 logs.go:282] 0 containers: []
	W1027 22:55:48.145405  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:48.145413  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:48.145510  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:48.195908  387237 cri.go:89] found id: ""
	I1027 22:55:48.195944  387237 logs.go:282] 0 containers: []
	W1027 22:55:48.195953  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:48.195973  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:48.195987  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:48.215682  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:48.215720  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:48.314307  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:48.314343  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:48.314361  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:48.405265  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:48.405311  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:48.488090  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:48.488133  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:48.545783  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:48.545818  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:48.813882  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:48.813939  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:48.930548  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:48.930602  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:49.004767  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:49.004805  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:49.067534  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:49.067581  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:51.639026  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:51.670042  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:51.670143  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:51.721749  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:51.721779  387237 cri.go:89] found id: ""
	I1027 22:55:51.721789  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:51.721872  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:51.728602  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:51.728702  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:51.786073  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:51.786108  387237 cri.go:89] found id: ""
	I1027 22:55:51.786120  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:51.786204  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:51.791628  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:51.791720  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:51.842755  387237 cri.go:89] found id: ""
	I1027 22:55:51.842795  387237 logs.go:282] 0 containers: []
	W1027 22:55:51.842806  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:51.842813  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:51.842901  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:51.901951  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:51.901987  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:51.901995  387237 cri.go:89] found id: ""
	I1027 22:55:51.902007  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:51.902095  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:51.909070  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:51.914960  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:51.915064  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:51.968642  387237 cri.go:89] found id: ""
	I1027 22:55:51.968686  387237 logs.go:282] 0 containers: []
	W1027 22:55:51.968701  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:51.968710  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:51.968778  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:52.012617  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:52.012652  387237 cri.go:89] found id: ""
	I1027 22:55:52.012665  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:55:52.012764  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:52.020637  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:52.020707  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:52.064452  387237 cri.go:89] found id: ""
	I1027 22:55:52.064494  387237 logs.go:282] 0 containers: []
	W1027 22:55:52.064506  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:52.064514  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:52.064590  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:52.107320  387237 cri.go:89] found id: ""
	I1027 22:55:52.107352  387237 logs.go:282] 0 containers: []
	W1027 22:55:52.107366  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:52.107386  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:52.107406  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:52.247304  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:52.247363  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:52.269091  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:52.269138  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:52.349622  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:52.349684  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:52.349704  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:52.438970  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:52.439020  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:52.488372  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:52.488427  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:52.538933  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:52.538972  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:52.806797  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:52.806844  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:52.865178  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:52.865215  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:52.934824  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:52.934874  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:55.509360  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:55.538173  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:55.538276  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:55.589367  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:55.589409  387237 cri.go:89] found id: ""
	I1027 22:55:55.589422  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:55.589504  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:55.594745  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:55.594834  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:55.644556  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:55.644586  387237 cri.go:89] found id: ""
	I1027 22:55:55.644599  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:55.644681  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:55.649835  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:55.649944  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:55.701505  387237 cri.go:89] found id: ""
	I1027 22:55:55.701543  387237 logs.go:282] 0 containers: []
	W1027 22:55:55.701554  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:55.701562  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:55.701632  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:55.743249  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:55.743276  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:55.743283  387237 cri.go:89] found id: ""
	I1027 22:55:55.743293  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:55.743361  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:55.748804  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:55.753761  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:55.753853  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:55.804180  387237 cri.go:89] found id: ""
	I1027 22:55:55.804217  387237 logs.go:282] 0 containers: []
	W1027 22:55:55.804226  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:55.804233  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:55.804293  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:55.848084  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:55.848118  387237 cri.go:89] found id: ""
	I1027 22:55:55.848129  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:55:55.848204  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:55.854176  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:55.854285  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:55.899003  387237 cri.go:89] found id: ""
	I1027 22:55:55.899041  387237 logs.go:282] 0 containers: []
	W1027 22:55:55.899051  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:55.899064  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:55.899124  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:55.946738  387237 cri.go:89] found id: ""
	I1027 22:55:55.946773  387237 logs.go:282] 0 containers: []
	W1027 22:55:55.946783  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:55.946804  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:55:55.946826  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:55:55.963529  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:55.963576  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:55:56.046613  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:55:56.046648  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:55:56.046675  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:56.111208  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:55:56.111252  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:56.157713  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:55:56.157760  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:55:56.437064  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:55:56.437128  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:55:56.488774  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:56.488815  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:56.605150  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:55:56.605207  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:56.687787  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:55:56.687845  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:56.763165  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:55:56.763230  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:55:59.315075  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:55:59.345023  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:55:59.345103  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:55:59.400133  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:55:59.400163  387237 cri.go:89] found id: ""
	I1027 22:55:59.400173  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:55:59.400237  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:59.405972  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:55:59.406070  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:55:59.464297  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:55:59.464336  387237 cri.go:89] found id: ""
	I1027 22:55:59.464349  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:55:59.464429  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:59.470079  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:55:59.470162  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:55:59.521534  387237 cri.go:89] found id: ""
	I1027 22:55:59.521565  387237 logs.go:282] 0 containers: []
	W1027 22:55:59.521573  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:55:59.521579  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:55:59.521638  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:55:59.579268  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:55:59.579302  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:55:59.579308  387237 cri.go:89] found id: ""
	I1027 22:55:59.579319  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:55:59.579390  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:59.587192  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:59.594246  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:55:59.594351  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:55:59.654804  387237 cri.go:89] found id: ""
	I1027 22:55:59.654973  387237 logs.go:282] 0 containers: []
	W1027 22:55:59.654987  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:55:59.654996  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:55:59.655070  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:55:59.712572  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	E1027 23:05:34.979369  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kindnet-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	I1027 22:55:59.712607  387237 cri.go:89] found id: ""
	I1027 22:55:59.712619  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:55:59.712691  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:55:59.717848  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:55:59.717966  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:55:59.759824  387237 cri.go:89] found id: ""
	I1027 22:55:59.759860  387237 logs.go:282] 0 containers: []
	W1027 22:55:59.759871  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:55:59.759880  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:55:59.759965  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:55:59.805129  387237 cri.go:89] found id: ""
	I1027 22:55:59.805162  387237 logs.go:282] 0 containers: []
	W1027 22:55:59.805173  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:55:59.805196  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:55:59.805210  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:55:59.921881  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:55:59.921933  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:00.010191  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:00.010223  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:00.010246  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:00.110372  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:00.110423  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:00.187882  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:00.187948  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:00.232162  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:00.232194  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:00.249285  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:00.249323  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:00.344266  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:00.344310  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:00.400877  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:00.400941  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:00.709997  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:00.710051  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:03.283911  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:03.305576  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:03.305668  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:03.354102  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:03.354134  387237 cri.go:89] found id: ""
	I1027 22:56:03.354146  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:03.354221  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:03.360818  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:03.360913  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:03.407708  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:03.407737  387237 cri.go:89] found id: ""
	I1027 22:56:03.407753  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:03.407816  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:03.412636  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:03.412723  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:03.458835  387237 cri.go:89] found id: ""
	I1027 22:56:03.458871  387237 logs.go:282] 0 containers: []
	W1027 22:56:03.458883  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:03.458917  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:03.458991  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:03.502930  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:03.502961  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:03.502966  387237 cri.go:89] found id: ""
	I1027 22:56:03.502976  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:03.503037  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:03.508722  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:03.514081  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:03.514179  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:03.559143  387237 cri.go:89] found id: ""
	I1027 22:56:03.559180  387237 logs.go:282] 0 containers: []
	W1027 22:56:03.559191  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:03.559198  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:03.559274  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:03.612821  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:03.612860  387237 cri.go:89] found id: ""
	I1027 22:56:03.612871  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:03.612967  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:03.619096  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:03.619186  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:03.664039  387237 cri.go:89] found id: ""
	I1027 22:56:03.664073  387237 logs.go:282] 0 containers: []
	W1027 22:56:03.664084  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:03.664099  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:03.664176  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:03.708426  387237 cri.go:89] found id: ""
	I1027 22:56:03.708467  387237 logs.go:282] 0 containers: []
	W1027 22:56:03.708477  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:03.708501  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:03.708519  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:03.819828  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:03.819872  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:03.909586  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:03.909625  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:03.909643  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:03.999083  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:03.999143  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:04.025320  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:04.025365  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:04.096220  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:04.096263  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:04.166493  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:04.166541  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:04.204510  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:04.204545  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:04.246011  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:04.246042  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:04.491326  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:04.491374  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:07.043362  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:07.063594  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:07.063707  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:07.107370  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:07.107406  387237 cri.go:89] found id: ""
	I1027 22:56:07.107421  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:07.107501  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:07.112978  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:07.113083  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:07.167091  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:07.167126  387237 cri.go:89] found id: ""
	I1027 22:56:07.167152  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:07.167239  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:07.172415  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:07.172496  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:07.216376  387237 cri.go:89] found id: ""
	I1027 22:56:07.216411  387237 logs.go:282] 0 containers: []
	W1027 22:56:07.216422  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:07.216430  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:07.216497  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:07.263509  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:07.263543  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:07.263551  387237 cri.go:89] found id: ""
	I1027 22:56:07.263562  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:07.263641  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:07.268990  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:07.274287  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:07.274395  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:07.325494  387237 cri.go:89] found id: ""
	I1027 22:56:07.325534  387237 logs.go:282] 0 containers: []
	W1027 22:56:07.325546  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:07.325554  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:07.325647  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:07.374153  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:07.374187  387237 cri.go:89] found id: ""
	I1027 22:56:07.374200  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:07.374273  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:07.379955  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:07.380026  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:07.426208  387237 cri.go:89] found id: ""
	I1027 22:56:07.426255  387237 logs.go:282] 0 containers: []
	W1027 22:56:07.426267  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:07.426274  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:07.426353  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:07.483714  387237 cri.go:89] found id: ""
	I1027 22:56:07.483745  387237 logs.go:282] 0 containers: []
	W1027 22:56:07.483756  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:07.483776  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:07.483795  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:07.575128  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:07.575189  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:07.847746  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:07.847802  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:07.867972  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:07.868016  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:07.951067  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:07.951102  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:07.951123  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:08.035419  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:08.035474  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:08.081702  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:08.081738  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:08.131174  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:08.131207  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:08.186905  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:08.186945  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:08.299208  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:08.299252  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:10.858211  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:10.881169  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:10.881243  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:10.931863  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:10.931913  387237 cri.go:89] found id: ""
	I1027 22:56:10.931925  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:10.932001  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:10.939706  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:10.939813  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:10.990815  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:10.990849  387237 cri.go:89] found id: ""
	I1027 22:56:10.990859  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:10.990934  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:10.998048  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:10.998136  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:11.053249  387237 cri.go:89] found id: ""
	I1027 22:56:11.053284  387237 logs.go:282] 0 containers: []
	W1027 22:56:11.053295  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:11.053303  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:11.053392  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:11.103131  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:11.103162  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:11.103168  387237 cri.go:89] found id: ""
	I1027 22:56:11.103181  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:11.103258  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:11.110034  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:11.116408  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:11.116495  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:11.169371  387237 cri.go:89] found id: ""
	I1027 22:56:11.169407  387237 logs.go:282] 0 containers: []
	W1027 22:56:11.169419  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:11.169435  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:11.169506  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:11.234816  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:11.234840  387237 cri.go:89] found id: ""
	I1027 22:56:11.234849  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:11.234915  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:11.242232  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:11.242313  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:11.294811  387237 cri.go:89] found id: ""
	I1027 22:56:11.294850  387237 logs.go:282] 0 containers: []
	W1027 22:56:11.294861  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:11.294869  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:11.294955  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:11.346136  387237 cri.go:89] found id: ""
	I1027 22:56:11.346172  387237 logs.go:282] 0 containers: []
	W1027 22:56:11.346183  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:11.346205  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:11.346219  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:11.439937  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:11.439972  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:11.439992  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:11.537356  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:11.537399  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:11.620517  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:11.620574  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:11.680679  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:11.680731  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:12.097586  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:12.097652  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:12.179990  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:12.180035  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:12.334641  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:12.334694  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:12.454775  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:12.454837  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:12.526783  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:12.526813  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:15.060043  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:15.121459  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:15.121556  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:15.201188  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:15.201221  387237 cri.go:89] found id: ""
	I1027 22:56:15.201233  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:15.201306  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:15.208436  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:15.208527  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:15.270223  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:15.270254  387237 cri.go:89] found id: ""
	I1027 22:56:15.270265  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:15.270333  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:15.277133  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:15.277219  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:15.331264  387237 cri.go:89] found id: ""
	I1027 22:56:15.331300  387237 logs.go:282] 0 containers: []
	W1027 22:56:15.331311  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:15.331319  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:15.331406  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:15.395175  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:15.395223  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:15.395230  387237 cri.go:89] found id: ""
	I1027 22:56:15.395241  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:15.395317  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:15.402787  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:15.408636  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:15.408733  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:15.475081  387237 cri.go:89] found id: ""
	I1027 22:56:15.475121  387237 logs.go:282] 0 containers: []
	W1027 22:56:15.475134  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:15.475144  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:15.475228  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:15.539422  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:15.539454  387237 cri.go:89] found id: ""
	I1027 22:56:15.539465  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:15.539539  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:15.545679  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:15.545797  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:15.611191  387237 cri.go:89] found id: ""
	I1027 22:56:15.611229  387237 logs.go:282] 0 containers: []
	W1027 22:56:15.611241  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:15.611250  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:15.611338  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:15.680556  387237 cri.go:89] found id: ""
	I1027 22:56:15.680598  387237 logs.go:282] 0 containers: []
	W1027 22:56:15.680610  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:15.680633  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:15.680648  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:15.765766  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:15.765810  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:15.818602  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:15.818648  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:15.836237  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:15.836278  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:15.898079  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:15.898132  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:15.945267  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:15.945307  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:16.260329  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:16.260410  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:16.317048  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:16.317097  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:16.441386  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:16.441438  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:16.545998  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:16.546027  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:16.546043  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:19.131977  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:19.156294  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:19.156377  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:19.212528  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:19.212559  387237 cri.go:89] found id: ""
	I1027 22:56:19.212569  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:19.212638  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:19.219940  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:19.220031  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:19.270807  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:19.270836  387237 cri.go:89] found id: ""
	I1027 22:56:19.270845  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:19.270951  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:19.276460  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:19.276539  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:19.326816  387237 cri.go:89] found id: ""
	I1027 22:56:19.326851  387237 logs.go:282] 0 containers: []
	W1027 22:56:19.326860  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:19.326867  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:19.326941  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:19.383846  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:19.383874  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:19.383878  387237 cri.go:89] found id: ""
	I1027 22:56:19.383903  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:19.383976  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:19.390009  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:19.396017  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:19.396102  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:19.442908  387237 cri.go:89] found id: ""
	I1027 22:56:19.442945  387237 logs.go:282] 0 containers: []
	W1027 22:56:19.442957  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:19.442965  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:19.443041  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:19.494428  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:19.494465  387237 cri.go:89] found id: ""
	I1027 22:56:19.494477  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:19.494566  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:19.499918  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:19.500001  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:19.560930  387237 cri.go:89] found id: ""
	I1027 22:56:19.560972  387237 logs.go:282] 0 containers: []
	W1027 22:56:19.560983  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:19.560991  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:19.561070  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:19.608124  387237 cri.go:89] found id: ""
	I1027 22:56:19.608162  387237 logs.go:282] 0 containers: []
	W1027 22:56:19.608170  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:19.608192  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:19.608209  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:19.716759  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:19.716801  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:19.807296  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:19.807332  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:19.807350  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:19.851680  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:19.851722  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:19.895493  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:19.895529  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:20.198229  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:20.198278  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:20.253740  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:20.253789  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:20.273048  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:20.273086  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:20.351474  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:20.351534  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:20.420396  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:20.420448  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:23.012996  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:23.041058  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:23.041138  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:23.105527  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:23.105562  387237 cri.go:89] found id: ""
	I1027 22:56:23.105584  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:23.105661  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:23.112917  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:23.113018  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:23.177616  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:23.177645  387237 cri.go:89] found id: ""
	I1027 22:56:23.177665  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:23.177737  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:23.183848  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:23.183971  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:23.231216  387237 cri.go:89] found id: ""
	I1027 22:56:23.231254  387237 logs.go:282] 0 containers: []
	W1027 22:56:23.231266  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:23.231275  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:23.231353  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:23.282091  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:23.282123  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:23.282129  387237 cri.go:89] found id: ""
	I1027 22:56:23.282140  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:23.282215  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:23.289040  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:23.295474  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:23.295574  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:23.355000  387237 cri.go:89] found id: ""
	I1027 22:56:23.355031  387237 logs.go:282] 0 containers: []
	W1027 22:56:23.355042  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:23.355049  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:23.355120  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:23.413383  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:23.413423  387237 cri.go:89] found id: ""
	I1027 22:56:23.413435  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:23.413509  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:23.419599  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:23.419690  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:23.471144  387237 cri.go:89] found id: ""
	I1027 22:56:23.471191  387237 logs.go:282] 0 containers: []
	W1027 22:56:23.471203  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:23.471213  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:23.471292  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:23.525527  387237 cri.go:89] found id: ""
	I1027 22:56:23.525566  387237 logs.go:282] 0 containers: []
	W1027 22:56:23.525577  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:23.525601  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:23.525619  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:23.549147  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:23.549184  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:23.643935  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:23.643963  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:23.643979  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:23.722996  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:23.723044  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:23.830683  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:23.830744  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:23.887360  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:23.887408  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:24.175751  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:24.175817  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:24.263927  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:24.263987  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:24.328260  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:24.328320  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:24.407463  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:24.407520  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:27.072721  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:27.098342  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:27.098417  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:27.148594  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:27.148629  387237 cri.go:89] found id: ""
	I1027 22:56:27.148641  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:27.148717  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:27.154047  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:27.154140  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:27.200417  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:27.200459  387237 cri.go:89] found id: ""
	I1027 22:56:27.200472  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:27.200542  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:27.206022  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:27.206112  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:27.249195  387237 cri.go:89] found id: ""
	I1027 22:56:27.249230  387237 logs.go:282] 0 containers: []
	W1027 22:56:27.249241  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:27.249249  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:27.249327  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:27.291457  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:27.291492  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:27.291500  387237 cri.go:89] found id: ""
	I1027 22:56:27.291513  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:27.291629  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:27.296790  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:27.301906  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:27.301994  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:27.344355  387237 cri.go:89] found id: ""
	I1027 22:56:27.344391  387237 logs.go:282] 0 containers: []
	W1027 22:56:27.344401  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:27.344409  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:27.344477  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:27.389624  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:27.389656  387237 cri.go:89] found id: ""
	I1027 22:56:27.389669  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:27.389742  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:27.395064  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:27.395166  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:27.437595  387237 cri.go:89] found id: ""
	I1027 22:56:27.437632  387237 logs.go:282] 0 containers: []
	W1027 22:56:27.437644  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:27.437653  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:27.437722  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:27.484872  387237 cri.go:89] found id: ""
	I1027 22:56:27.484926  387237 logs.go:282] 0 containers: []
	W1027 22:56:27.484937  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:27.484958  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:27.484972  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:27.589451  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:27.589497  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:27.611724  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:27.611772  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:27.699040  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:27.699073  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:27.699092  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:27.772251  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:27.772299  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:27.869794  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:27.869843  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:27.922823  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:27.922913  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:27.975372  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:27.975409  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:28.044064  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:28.044107  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:28.326282  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:28.326329  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:30.883292  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:30.908361  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:30.908449  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:30.958106  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:30.958140  387237 cri.go:89] found id: ""
	I1027 22:56:30.958153  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:30.958222  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:30.963156  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:30.963230  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:31.014455  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:31.014484  387237 cri.go:89] found id: ""
	I1027 22:56:31.014497  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:31.014579  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:31.020510  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:31.020605  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:31.072548  387237 cri.go:89] found id: ""
	I1027 22:56:31.072583  387237 logs.go:282] 0 containers: []
	W1027 22:56:31.072593  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:31.072601  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:31.072673  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:31.127573  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:31.127610  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:31.127617  387237 cri.go:89] found id: ""
	I1027 22:56:31.127628  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:31.127703  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:31.134822  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:31.141615  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:31.141724  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:31.200181  387237 cri.go:89] found id: ""
	I1027 22:56:31.200218  387237 logs.go:282] 0 containers: []
	W1027 22:56:31.200229  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:31.200239  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:31.200315  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:31.254157  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:31.254188  387237 cri.go:89] found id: ""
	I1027 22:56:31.254201  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:31.254266  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:31.261321  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:31.261421  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:31.314436  387237 cri.go:89] found id: ""
	I1027 22:56:31.314483  387237 logs.go:282] 0 containers: []
	W1027 22:56:31.314496  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:31.314511  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:31.314586  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:31.375526  387237 cri.go:89] found id: ""
	I1027 22:56:31.375564  387237 logs.go:282] 0 containers: []
	W1027 22:56:31.375576  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:31.375598  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:31.375618  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:31.452637  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:31.452685  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:31.501786  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:31.501822  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:31.555881  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:31.555945  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:31.619434  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:31.619484  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:31.708397  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:31.708437  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:31.708453  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:31.789244  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:31.789291  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:32.079293  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:32.079344  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:32.224364  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:32.224411  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:32.250167  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:32.250218  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
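Each retry above ends the same way: the apiserver, etcd, scheduler and controller-manager containers are found, but the API server on localhost:8443 refuses connections, so the describe-nodes step never succeeds. A small sketch of that reachability probe, with the port and timeout taken as assumptions for illustration only:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // probe the endpoint the log keeps failing against
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // corresponds to the repeated "connection ... refused" in the log
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
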
	I1027 22:56:34.845228  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:34.875761  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:34.875860  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:34.943397  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:34.943430  387237 cri.go:89] found id: ""
	I1027 22:56:34.943443  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:34.943517  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:34.951935  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:34.952026  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:35.021698  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:35.021805  387237 cri.go:89] found id: ""
	I1027 22:56:35.021829  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:35.021947  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:35.030386  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:35.030482  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:35.084260  387237 cri.go:89] found id: ""
	I1027 22:56:35.084290  387237 logs.go:282] 0 containers: []
	W1027 22:56:35.084300  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:35.084308  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:35.084377  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:35.128971  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:35.129001  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:35.129006  387237 cri.go:89] found id: ""
	I1027 22:56:35.129014  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:35.129083  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:35.134386  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:35.139504  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:35.139598  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:35.190481  387237 cri.go:89] found id: ""
	I1027 22:56:35.190517  387237 logs.go:282] 0 containers: []
	W1027 22:56:35.190527  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:35.190535  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:35.190621  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:35.243918  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:35.243960  387237 cri.go:89] found id: ""
	I1027 22:56:35.243973  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:35.244054  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:35.250778  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:35.250878  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:35.304189  387237 cri.go:89] found id: ""
	I1027 22:56:35.304228  387237 logs.go:282] 0 containers: []
	W1027 22:56:35.304240  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:35.304248  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:35.304322  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:35.371030  387237 cri.go:89] found id: ""
	I1027 22:56:35.371071  387237 logs.go:282] 0 containers: []
	W1027 22:56:35.371084  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:35.371109  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:35.371129  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:35.449410  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:35.449460  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:35.576765  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:35.576807  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:35.598544  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:35.598586  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:35.679676  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:35.679729  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:35.759081  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:35.759118  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:35.823559  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:35.823645  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:35.947493  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:35.947529  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:35.947550  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:36.029674  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:36.029735  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:36.111268  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:36.111340  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:38.901937  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:38.926752  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:38.926838  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:38.980137  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:38.980170  387237 cri.go:89] found id: ""
	I1027 22:56:38.980182  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:38.980252  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:38.986227  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:38.986330  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:39.043677  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:39.043707  387237 cri.go:89] found id: ""
	I1027 22:56:39.043720  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:39.043792  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:39.049444  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:39.049537  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:39.111121  387237 cri.go:89] found id: ""
	I1027 22:56:39.111154  387237 logs.go:282] 0 containers: []
	W1027 22:56:39.111165  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:39.111173  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:39.111240  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:39.173759  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:39.173795  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:39.173803  387237 cri.go:89] found id: ""
	I1027 22:56:39.173816  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:39.173931  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:39.181489  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:39.188517  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:39.188603  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:39.246060  387237 cri.go:89] found id: ""
	I1027 22:56:39.246098  387237 logs.go:282] 0 containers: []
	W1027 22:56:39.246110  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:39.246118  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:39.246194  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:39.291219  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:39.291251  387237 cri.go:89] found id: ""
	I1027 22:56:39.291265  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:39.291331  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:39.296793  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:39.296882  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:39.349747  387237 cri.go:89] found id: ""
	I1027 22:56:39.349783  387237 logs.go:282] 0 containers: []
	W1027 22:56:39.349796  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:39.349806  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:39.349911  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:39.392974  387237 cri.go:89] found id: ""
	I1027 22:56:39.393011  387237 logs.go:282] 0 containers: []
	W1027 22:56:39.393022  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:39.393046  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:39.393064  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:39.669060  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:39.669103  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:39.731626  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:39.731694  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:39.755372  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:39.755419  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:39.829108  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:39.829153  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:39.887220  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:39.887270  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:40.017308  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:40.017366  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:40.111248  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:40.111289  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:40.111311  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:40.191551  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:40.191595  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:40.273915  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:40.273965  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:42.830091  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:42.849839  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:42.849939  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:42.911591  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:42.911629  387237 cri.go:89] found id: ""
	I1027 22:56:42.911643  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:42.911726  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:42.918373  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:42.918463  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:42.974995  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:42.975038  387237 cri.go:89] found id: ""
	I1027 22:56:42.975051  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:42.975120  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:42.981590  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:42.981682  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:43.034691  387237 cri.go:89] found id: ""
	I1027 22:56:43.034739  387237 logs.go:282] 0 containers: []
	W1027 22:56:43.034750  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:43.034759  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:43.034837  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:43.086728  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:43.086756  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:43.086761  387237 cri.go:89] found id: ""
	I1027 22:56:43.086771  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:43.086836  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:43.093312  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:43.098623  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:43.098707  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:43.160234  387237 cri.go:89] found id: ""
	I1027 22:56:43.160273  387237 logs.go:282] 0 containers: []
	W1027 22:56:43.160285  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:43.160293  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:43.160367  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:43.210313  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:43.210349  387237 cri.go:89] found id: ""
	I1027 22:56:43.210361  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:43.210460  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:43.217218  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:43.217313  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:43.277863  387237 cri.go:89] found id: ""
	I1027 22:56:43.277914  387237 logs.go:282] 0 containers: []
	W1027 22:56:43.277929  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:43.277939  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:43.278008  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:43.327724  387237 cri.go:89] found id: ""
	I1027 22:56:43.327761  387237 logs.go:282] 0 containers: []
	W1027 22:56:43.327774  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:43.327797  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:43.327816  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:43.434071  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:43.434116  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:43.518164  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:43.518195  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:43.518210  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:43.574685  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:43.574731  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:43.654767  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:43.654813  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
E1027 23:05:34.993337  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kindnet-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	I1027 22:56:43.700676  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:43.700721  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:43.970772  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:43.970823  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:44.017065  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:44.017103  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:44.033377  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:44.033411  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:44.109350  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:44.109395  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:46.667000  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:46.695486  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:46.695584  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:46.750953  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:46.750985  387237 cri.go:89] found id: ""
	I1027 22:56:46.750998  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:46.751075  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:46.758212  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:46.758287  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:46.820023  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:46.820058  387237 cri.go:89] found id: ""
	I1027 22:56:46.820072  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:46.820154  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:46.827580  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:46.827700  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:46.894663  387237 cri.go:89] found id: ""
	I1027 22:56:46.894699  387237 logs.go:282] 0 containers: []
	W1027 22:56:46.894711  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:46.894719  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:46.894803  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:46.959453  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:46.959482  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:46.959489  387237 cri.go:89] found id: ""
	I1027 22:56:46.959500  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:46.959569  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:46.967288  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:46.975512  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:46.975603  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:47.027008  387237 cri.go:89] found id: ""
	I1027 22:56:47.027046  387237 logs.go:282] 0 containers: []
	W1027 22:56:47.027057  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:47.027066  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:47.027142  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:47.081534  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:47.081573  387237 cri.go:89] found id: ""
	I1027 22:56:47.081582  387237 logs.go:282] 1 containers: [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:47.081644  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:47.088643  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:47.088758  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:47.150740  387237 cri.go:89] found id: ""
	I1027 22:56:47.150774  387237 logs.go:282] 0 containers: []
	W1027 22:56:47.150787  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:47.150796  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:47.150875  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:47.211473  387237 cri.go:89] found id: ""
	I1027 22:56:47.211520  387237 logs.go:282] 0 containers: []
	W1027 22:56:47.211531  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:47.211556  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:47.211573  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:47.277513  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:47.277571  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:47.393480  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:47.393528  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:47.416040  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:47.416081  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:47.511200  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:47.511237  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:47.511253  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:47.594560  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:47.594623  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:47.684332  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:47.684397  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:47.746900  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:47.746948  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:47.848072  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:47.848117  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:47.904986  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:47.905019  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:50.717215  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:50.750316  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:50.750405  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:50.823644  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:50.823681  387237 cri.go:89] found id: ""
	I1027 22:56:50.823695  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:50.823797  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:50.832544  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:50.832761  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:50.904596  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:50.904623  387237 cri.go:89] found id: ""
	I1027 22:56:50.904634  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:50.904713  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:50.912996  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:50.913092  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:50.997387  387237 cri.go:89] found id: ""
	I1027 22:56:50.997429  387237 logs.go:282] 0 containers: []
	W1027 22:56:50.997441  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:50.997448  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:50.997511  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:51.076141  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:51.076172  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:51.076178  387237 cri.go:89] found id: ""
	I1027 22:56:51.076188  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:51.076259  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:51.086395  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:51.094682  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:51.094836  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:51.171917  387237 cri.go:89] found id: ""
	I1027 22:56:51.171958  387237 logs.go:282] 0 containers: []
	W1027 22:56:51.171970  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:51.171980  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:51.172056  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:51.242725  387237 cri.go:89] found id: "62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:56:51.242830  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:51.242862  387237 cri.go:89] found id: ""
	I1027 22:56:51.242899  387237 logs.go:282] 2 containers: [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:51.242989  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:51.251089  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:51.258528  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:51.258609  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:51.324339  387237 cri.go:89] found id: ""
	I1027 22:56:51.324379  387237 logs.go:282] 0 containers: []
	W1027 22:56:51.324452  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:51.324464  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:51.324558  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:51.396311  387237 cri.go:89] found id: ""
	I1027 22:56:51.396348  387237 logs.go:282] 0 containers: []
	W1027 22:56:51.396359  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:51.396373  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:51.396390  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:51.466389  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:51.466441  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:51.787210  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:51.787266  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:51.847584  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:51.847619  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:51.870015  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:51.870048  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:51.959762  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:51.959797  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:51.959825  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:52.018053  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:52.018094  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:52.073946  387237 logs.go:123] Gathering logs for kube-controller-manager [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1] ...
	I1027 22:56:52.073991  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:56:52.133047  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:52.133082  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:52.245552  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:52.245613  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:52.347637  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:52.347691  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:54.932746  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:54.953232  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:54.953319  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:55.004137  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:55.004167  387237 cri.go:89] found id: ""
	I1027 22:56:55.004178  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:55.004248  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:55.010578  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:55.010699  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:55.068579  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:55.068610  387237 cri.go:89] found id: ""
	I1027 22:56:55.068643  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:55.068719  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:55.074277  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:55.074370  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:55.120576  387237 cri.go:89] found id: ""
	I1027 22:56:55.120611  387237 logs.go:282] 0 containers: []
	W1027 22:56:55.120623  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:55.120631  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:55.120711  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:55.165649  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:55.165680  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:55.165687  387237 cri.go:89] found id: ""
	I1027 22:56:55.165700  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:55.165774  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:55.171592  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:55.176949  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:55.177046  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:55.223432  387237 cri.go:89] found id: ""
	I1027 22:56:55.223476  387237 logs.go:282] 0 containers: []
	W1027 22:56:55.223490  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:55.223500  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:55.223597  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:55.279143  387237 cri.go:89] found id: "62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:56:55.279175  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:55.279181  387237 cri.go:89] found id: ""
	I1027 22:56:55.279192  387237 logs.go:282] 2 containers: [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:55.279264  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:55.284365  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:55.289424  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:55.289519  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:55.333821  387237 cri.go:89] found id: ""
	I1027 22:56:55.333864  387237 logs.go:282] 0 containers: []
	W1027 22:56:55.333904  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:55.333916  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:55.334001  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:55.401839  387237 cri.go:89] found id: ""
	I1027 22:56:55.401877  387237 logs.go:282] 0 containers: []
	W1027 22:56:55.401908  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:55.401922  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:56:55.401941  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:56:55.425161  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:56:55.425203  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:56:55.516199  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:56:55.516226  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:55.516244  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:55.592092  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:55.592144  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:55.643236  387237 logs.go:123] Gathering logs for kube-controller-manager [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1] ...
	I1027 22:56:55.643283  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:56:55.695157  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:56:55.695194  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:55.747257  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:55.747295  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:56:56.035058  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:56:56.035105  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:56:56.173289  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:56.173331  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:56.233228  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:56.233269  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:56.315859  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:56:56.315908  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:56:58.867770  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:56:58.892165  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:56:58.892239  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:56:58.942363  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:58.942391  387237 cri.go:89] found id: ""
	I1027 22:56:58.942403  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:56:58.942473  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:58.949532  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:56:58.949617  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:56:59.005661  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:59.005690  387237 cri.go:89] found id: ""
	I1027 22:56:59.005702  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:56:59.005774  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:59.012509  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:56:59.012602  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:56:59.075949  387237 cri.go:89] found id: ""
	I1027 22:56:59.075987  387237 logs.go:282] 0 containers: []
	W1027 22:56:59.075998  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:56:59.076007  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:56:59.076087  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:56:59.149951  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:59.149984  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:59.149989  387237 cri.go:89] found id: ""
	I1027 22:56:59.150000  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:56:59.150078  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:59.159576  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:59.172559  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:56:59.172641  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:56:59.248414  387237 cri.go:89] found id: ""
	I1027 22:56:59.248461  387237 logs.go:282] 0 containers: []
	W1027 22:56:59.248473  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:56:59.248485  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:56:59.248562  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:56:59.335350  387237 cri.go:89] found id: "62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:56:59.335377  387237 cri.go:89] found id: "500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:56:59.335382  387237 cri.go:89] found id: ""
	I1027 22:56:59.335392  387237 logs.go:282] 2 containers: [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201]
	I1027 22:56:59.335460  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:59.346026  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:56:59.359392  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:56:59.359535  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:56:59.428414  387237 cri.go:89] found id: ""
	I1027 22:56:59.428477  387237 logs.go:282] 0 containers: []
	W1027 22:56:59.428492  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:56:59.428501  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:56:59.428712  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:56:59.493273  387237 cri.go:89] found id: ""
	I1027 22:56:59.493300  387237 logs.go:282] 0 containers: []
	W1027 22:56:59.493307  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:56:59.493317  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:56:59.493329  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:56:59.606266  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:56:59.606318  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:56:59.700944  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:56:59.700995  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:56:59.829126  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:56:59.829199  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:56:59.901422  387237 logs.go:123] Gathering logs for kube-controller-manager [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1] ...
	I1027 22:56:59.901470  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:56:59.969585  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:56:59.969637  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:57:00.403479  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:57:00.403600  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:57:00.581635  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:57:00.581706  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:57:00.628240  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:57:00.628293  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:57:00.759543  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:57:00.759581  387237 logs.go:123] Gathering logs for kube-controller-manager [500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201] ...
	I1027 22:57:00.759605  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500c4ab59b212c650dd80e3894d36d659807dab96519536da53ef6077a5a4201"
	I1027 22:57:00.820911  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:57:00.820957  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:57:03.402286  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:57:03.432371  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:57:03.432462  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:57:03.484580  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:57:03.484616  387237 cri.go:89] found id: ""
	I1027 22:57:03.484628  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:57:03.484705  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:03.490606  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:57:03.490716  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:57:03.537718  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:57:03.537756  387237 cri.go:89] found id: ""
	I1027 22:57:03.537769  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:57:03.537841  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:03.545343  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:57:03.545442  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:57:03.603097  387237 cri.go:89] found id: ""
	I1027 22:57:03.603142  387237 logs.go:282] 0 containers: []
	W1027 22:57:03.603154  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:57:03.603162  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:57:03.603244  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:57:03.656187  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:57:03.656223  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:57:03.656229  387237 cri.go:89] found id: ""
	I1027 22:57:03.656241  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:57:03.656321  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:03.663197  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:03.670199  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:57:03.670303  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:57:03.719383  387237 cri.go:89] found id: ""
	I1027 22:57:03.719412  387237 logs.go:282] 0 containers: []
	W1027 22:57:03.719420  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:57:03.719427  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:57:03.719504  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:57:03.775369  387237 cri.go:89] found id: "62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:57:03.775408  387237 cri.go:89] found id: ""
	I1027 22:57:03.775421  387237 logs.go:282] 1 containers: [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1]
	I1027 22:57:03.775493  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:03.784097  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:57:03.784189  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:57:03.845989  387237 cri.go:89] found id: ""
	I1027 22:57:03.846028  387237 logs.go:282] 0 containers: []
	W1027 22:57:03.846037  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:57:03.846043  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:57:03.846113  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:57:03.891247  387237 cri.go:89] found id: ""
	I1027 22:57:03.891282  387237 logs.go:282] 0 containers: []
	W1027 22:57:03.891290  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:57:03.891308  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:57:03.891323  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:57:04.005824  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:57:04.005871  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:57:04.095945  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:57:04.095980  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:57:04.096001  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:57:04.180143  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:57:04.180193  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:57:04.234572  387237 logs.go:123] Gathering logs for kube-controller-manager [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1] ...
	I1027 22:57:04.234705  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:57:04.280382  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:57:04.280439  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:57:04.564737  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:57:04.564784  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:57:04.642151  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:57:04.642208  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:57:04.663590  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:57:04.663629  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:57:04.756520  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:57:04.756571  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:57:07.319037  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:57:07.342159  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:57:07.342243  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:57:07.394380  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:57:07.394414  387237 cri.go:89] found id: ""
	I1027 22:57:07.394426  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:57:07.394493  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:07.400184  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:57:07.400280  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:57:07.453493  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:57:07.453525  387237 cri.go:89] found id: ""
	I1027 22:57:07.453537  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:57:07.453604  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:07.459015  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:57:07.459087  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:57:07.503978  387237 cri.go:89] found id: ""
	I1027 22:57:07.504009  387237 logs.go:282] 0 containers: []
	W1027 22:57:07.504028  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:57:07.504034  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:57:07.504093  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:57:07.552397  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:57:07.552427  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:57:07.552432  387237 cri.go:89] found id: ""
	I1027 22:57:07.552442  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:57:07.552513  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:07.557785  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:07.563011  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:57:07.563100  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:57:07.612853  387237 cri.go:89] found id: ""
	I1027 22:57:07.612902  387237 logs.go:282] 0 containers: []
	W1027 22:57:07.612916  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:57:07.612926  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:57:07.613005  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:57:07.664477  387237 cri.go:89] found id: "62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:57:07.664506  387237 cri.go:89] found id: ""
	I1027 22:57:07.664515  387237 logs.go:282] 1 containers: [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1]
	I1027 22:57:07.664576  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:07.669985  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:57:07.670059  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:57:07.714586  387237 cri.go:89] found id: ""
	I1027 22:57:07.714622  387237 logs.go:282] 0 containers: []
	W1027 22:57:07.714639  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:57:07.714646  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:57:07.714715  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:57:07.759488  387237 cri.go:89] found id: ""
	I1027 22:57:07.759518  387237 logs.go:282] 0 containers: []
	W1027 22:57:07.759527  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:57:07.759546  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:57:07.759569  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:57:07.806331  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:57:07.806366  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:57:07.928021  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:57:07.928074  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:57:07.987794  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:57:07.987843  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:57:08.063151  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:57:08.063191  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:57:08.361111  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:57:08.361162  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:57:08.379016  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:57:08.379075  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:57:08.463754  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:57:08.463788  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:57:08.463808  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:57:08.537613  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:57:08.537657  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:57:08.582808  387237 logs.go:123] Gathering logs for kube-controller-manager [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1] ...
	I1027 22:57:08.582843  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:57:11.132343  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:57:11.160989  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:57:11.161071  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:57:11.208626  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:57:11.208649  387237 cri.go:89] found id: ""
	I1027 22:57:11.208660  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:57:11.208723  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:11.215820  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:57:11.215958  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:57:11.270266  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:57:11.270297  387237 cri.go:89] found id: ""
	I1027 22:57:11.270307  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:57:11.270377  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:11.275740  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:57:11.275814  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:57:11.327961  387237 cri.go:89] found id: ""
	I1027 22:57:11.327996  387237 logs.go:282] 0 containers: []
	W1027 22:57:11.328007  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:57:11.328015  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:57:11.328080  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:57:11.382476  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:57:11.382517  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:57:11.382523  387237 cri.go:89] found id: ""
	I1027 22:57:11.382535  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:57:11.382613  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:11.388314  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:11.394164  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:57:11.394240  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:57:11.464329  387237 cri.go:89] found id: ""
	I1027 22:57:11.464377  387237 logs.go:282] 0 containers: []
	W1027 22:57:11.464388  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:57:11.464397  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:57:11.464466  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:57:11.527130  387237 cri.go:89] found id: "62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:57:11.527153  387237 cri.go:89] found id: ""
	I1027 22:57:11.527163  387237 logs.go:282] 1 containers: [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1]
	I1027 22:57:11.527226  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:11.533446  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:57:11.533574  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:57:11.592317  387237 cri.go:89] found id: ""
	I1027 22:57:11.592357  387237 logs.go:282] 0 containers: []
	W1027 22:57:11.592369  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:57:11.592378  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:57:11.592445  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:57:11.648990  387237 cri.go:89] found id: ""
	I1027 22:57:11.649021  387237 logs.go:282] 0 containers: []
	W1027 22:57:11.649036  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:57:11.649058  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:57:11.649073  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:57:11.771155  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:57:11.771190  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:57:11.841583  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:57:11.841617  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:57:11.921586  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:57:11.921627  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:57:11.962818  387237 logs.go:123] Gathering logs for kube-controller-manager [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1] ...
	I1027 22:57:11.962855  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:57:12.010370  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:57:12.010405  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:57:12.071341  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:57:12.071376  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:57:12.090617  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:57:12.090647  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:57:12.172595  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:57:12.172654  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:57:12.172670  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:57:12.263523  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:57:12.263570  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:57:15.033030  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:57:15.057138  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:57:15.057240  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:57:15.110180  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:57:15.110210  387237 cri.go:89] found id: ""
	I1027 22:57:15.110222  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:57:15.110294  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:15.116857  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:57:15.116965  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:57:15.164948  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:57:15.164980  387237 cri.go:89] found id: ""
	I1027 22:57:15.164993  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:57:15.165066  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:15.170769  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:57:15.170864  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:57:15.217293  387237 cri.go:89] found id: ""
	I1027 22:57:15.217321  387237 logs.go:282] 0 containers: []
	W1027 22:57:15.217328  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:57:15.217335  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:57:15.217400  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:57:15.273135  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:57:15.273168  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:57:15.273174  387237 cri.go:89] found id: ""
	I1027 22:57:15.273186  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:57:15.273262  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:15.280435  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:15.285773  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:57:15.285865  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:57:15.328103  387237 cri.go:89] found id: ""
	I1027 22:57:15.328135  387237 logs.go:282] 0 containers: []
	W1027 22:57:15.328145  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:57:15.328153  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:57:15.328227  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:57:15.378519  387237 cri.go:89] found id: "62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:57:15.378544  387237 cri.go:89] found id: ""
	I1027 22:57:15.378554  387237 logs.go:282] 1 containers: [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1]
	I1027 22:57:15.378626  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:15.383967  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:57:15.384047  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:57:15.431308  387237 cri.go:89] found id: ""
	I1027 22:57:15.431355  387237 logs.go:282] 0 containers: []
	W1027 22:57:15.431366  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:57:15.431382  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:57:15.431464  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:57:15.481653  387237 cri.go:89] found id: ""
	I1027 22:57:15.481728  387237 logs.go:282] 0 containers: []
	W1027 22:57:15.481739  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:57:15.481772  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:57:15.481792  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:57:15.588419  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:57:15.588465  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:57:15.608390  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:57:15.608426  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:57:15.670539  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:57:15.670579  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:57:15.743710  387237 logs.go:123] Gathering logs for kube-controller-manager [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1] ...
	I1027 22:57:15.743759  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:57:15.787791  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:57:15.787824  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:57:15.866220  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:57:15.866244  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:57:15.866258  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:57:15.937654  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:57:15.937696  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:57:15.982589  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:57:15.982634  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:57:16.248319  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:57:16.248362  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:57:18.798997  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:57:18.821069  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:57:18.821131  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:57:18.870218  387237 cri.go:89] found id: "e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:57:18.870243  387237 cri.go:89] found id: ""
	I1027 22:57:18.870251  387237 logs.go:282] 1 containers: [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b]
	I1027 22:57:18.870309  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:18.875436  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:57:18.875510  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:57:18.919966  387237 cri.go:89] found id: "f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:57:18.919998  387237 cri.go:89] found id: ""
	I1027 22:57:18.920009  387237 logs.go:282] 1 containers: [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3]
	I1027 22:57:18.920081  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:18.926960  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:57:18.927053  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:57:18.971260  387237 cri.go:89] found id: ""
	I1027 22:57:18.971296  387237 logs.go:282] 0 containers: []
	W1027 22:57:18.971307  387237 logs.go:284] No container was found matching "coredns"
	I1027 22:57:18.971318  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:57:18.971383  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:57:19.019069  387237 cri.go:89] found id: "250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:57:19.019089  387237 cri.go:89] found id: "66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:57:19.019093  387237 cri.go:89] found id: ""
	I1027 22:57:19.019102  387237 logs.go:282] 2 containers: [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3]
	I1027 22:57:19.019166  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:19.025033  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:19.030267  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:57:19.030354  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:57:19.093198  387237 cri.go:89] found id: ""
	I1027 22:57:19.093229  387237 logs.go:282] 0 containers: []
	W1027 22:57:19.093240  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:57:19.093247  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:57:19.093320  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:57:19.152832  387237 cri.go:89] found id: "62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:57:19.152865  387237 cri.go:89] found id: ""
	I1027 22:57:19.152877  387237 logs.go:282] 1 containers: [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1]
	I1027 22:57:19.152961  387237 ssh_runner.go:195] Run: which crictl
	I1027 22:57:19.159151  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:57:19.159236  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:57:19.213157  387237 cri.go:89] found id: ""
	I1027 22:57:19.213189  387237 logs.go:282] 0 containers: []
	W1027 22:57:19.213200  387237 logs.go:284] No container was found matching "kindnet"
	I1027 22:57:19.213208  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:57:19.213273  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:57:19.266683  387237 cri.go:89] found id: ""
	I1027 22:57:19.266713  387237 logs.go:282] 0 containers: []
	W1027 22:57:19.266722  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:57:19.266742  387237 logs.go:123] Gathering logs for kube-apiserver [e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b] ...
	I1027 22:57:19.266759  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7a165cbfbaa832768c785bf237a1f9c9bcaaf69c5f6c4ced7aa4c7eba4f534b"
	I1027 22:57:19.381723  387237 logs.go:123] Gathering logs for etcd [f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3] ...
	I1027 22:57:19.381770  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6603df9bcf468077eba7b4491c5aa2c01b161d3d478870878c7c764e346d7b3"
	I1027 22:57:19.443265  387237 logs.go:123] Gathering logs for kube-scheduler [250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093] ...
	I1027 22:57:19.443308  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 250d24f34bc6290037a80ad64ad670d622c2e0c28b6a5b8ed1e85689b0615093"
	I1027 22:57:19.545416  387237 logs.go:123] Gathering logs for kube-scheduler [66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3] ...
	I1027 22:57:19.545481  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66f432db301d907e0da5ec5930fa8040dfa1ab3efb2d34e20932ac8478b3cbe3"
	I1027 22:57:19.593098  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 22:57:19.593154  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:57:19.738335  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:57:19.738381  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:57:19.845582  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:57:19.845614  387237 logs.go:123] Gathering logs for kube-controller-manager [62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1] ...
	I1027 22:57:19.845635  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62ca8e8948dd395a5b8da4f009583106162b68e955f124559db006cd4ea967c1"
	I1027 22:57:19.894869  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:57:19.894935  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:57:20.203773  387237 logs.go:123] Gathering logs for container status ...
	I1027 22:57:20.203814  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:57:20.258460  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 22:57:20.258500  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:57:22.777241  387237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:57:22.804505  387237 kubeadm.go:602] duration metric: took 4m2.707084855s to restartPrimaryControlPlane
	W1027 22:57:22.804630  387237 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1027 22:57:22.804712  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1027 22:57:25.227251  387237 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.422497083s)
	I1027 22:57:25.227351  387237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:57:25.254277  387237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:57:25.273929  387237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:57:25.287183  387237 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:57:25.287207  387237 kubeadm.go:158] found existing configuration files:
	
	I1027 22:57:25.287269  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:57:25.300193  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:57:25.300286  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:57:25.314521  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:57:25.328621  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:57:25.328707  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:57:25.343102  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:57:25.361398  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:57:25.361477  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:57:25.376389  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:57:25.395629  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:57:25.395726  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:57:25.411359  387237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 22:57:25.592452  387237 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:01:28.680381  387237 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.61.85:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1027 23:01:28.680509  387237 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1027 23:01:28.682949  387237 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 23:01:28.683027  387237 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:01:28.683176  387237 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:01:28.683350  387237 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:01:28.683495  387237 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 23:01:28.683554  387237 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 23:01:28.685494  387237 out.go:252]   - Generating certificates and keys ...
	I1027 23:01:28.685585  387237 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:01:28.685660  387237 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:01:28.685756  387237 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1027 23:01:28.685835  387237 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1027 23:01:28.685964  387237 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1027 23:01:28.686042  387237 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1027 23:01:28.686129  387237 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1027 23:01:28.686211  387237 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1027 23:01:28.686342  387237 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1027 23:01:28.686489  387237 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1027 23:01:28.686557  387237 kubeadm.go:319] [certs] Using the existing "sa" key
	I1027 23:01:28.686652  387237 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:01:28.686739  387237 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:01:28.686818  387237 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:01:28.686918  387237 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:01:28.687026  387237 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:01:28.687107  387237 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:01:28.687221  387237 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:01:28.687301  387237 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:01:28.688754  387237 out.go:252]   - Booting up control plane ...
	I1027 23:01:28.688863  387237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:01:28.689018  387237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:01:28.689121  387237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:01:28.689233  387237 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:01:28.689389  387237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:01:28.689557  387237 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:01:28.689648  387237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:01:28.689727  387237 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:01:28.689947  387237 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:01:28.690097  387237 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:01:28.690195  387237 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001212163s
	I1027 23:01:28.690328  387237 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:01:28.690413  387237 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.85:8443/livez
	I1027 23:01:28.690547  387237 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:01:28.690642  387237 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:01:28.690761  387237 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.702216361s
	I1027 23:01:28.690820  387237 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.146975083s
	I1027 23:01:28.690881  387237 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.00042765s
	I1027 23:01:28.690903  387237 kubeadm.go:319] 
	I1027 23:01:28.691029  387237 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1027 23:01:28.691104  387237 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1027 23:01:28.691197  387237 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1027 23:01:28.691328  387237 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1027 23:01:28.691457  387237 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1027 23:01:28.691572  387237 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1027 23:01:28.691608  387237 kubeadm.go:319] 
	W1027 23:01:28.691742  387237 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001212163s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.85:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.702216361s
	[control-plane-check] kube-scheduler is healthy after 2.146975083s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00042765s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.61.85:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	To see the stack trace of this error execute with --v=5 or higher
	
	I1027 23:01:28.691821  387237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1027 23:01:30.210559  387237 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.518710519s)
	I1027 23:01:30.210659  387237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:01:30.236763  387237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 23:01:30.250995  387237 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 23:01:30.251017  387237 kubeadm.go:158] found existing configuration files:
	
	I1027 23:01:30.251066  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 23:01:30.267611  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 23:01:30.267702  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 23:01:30.290131  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 23:01:30.312265  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 23:01:30.312323  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 23:01:30.328695  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 23:01:30.344846  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 23:01:30.344939  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 23:01:30.359158  387237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 23:01:30.372094  387237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 23:01:30.372158  387237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
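The grep/rm pairs above are minikube's stale kubeconfig cleanup: a file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (here none of the four files exist, so the removals are no-ops). A compact sketch of that check, assuming the endpoint string is exactly the one grepped for above:
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # drop any kubeconfig that does not point at the expected endpoint
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done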
	I1027 23:01:30.388344  387237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 23:01:30.449110  387237 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 23:01:30.449197  387237 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:01:30.571554  387237 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:01:30.571718  387237 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:01:30.571860  387237 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 23:01:30.584299  387237 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 23:01:30.587303  387237 out.go:252]   - Generating certificates and keys ...
	I1027 23:01:30.587441  387237 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:01:30.587569  387237 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:01:30.587673  387237 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1027 23:01:30.587757  387237 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1027 23:01:30.587866  387237 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1027 23:01:30.587990  387237 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1027 23:01:30.588102  387237 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1027 23:01:30.588208  387237 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1027 23:01:30.588320  387237 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1027 23:01:30.588431  387237 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1027 23:01:30.588488  387237 kubeadm.go:319] [certs] Using the existing "sa" key
	I1027 23:01:30.588571  387237 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:01:30.855360  387237 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:01:30.972557  387237 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:01:31.141805  387237 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:01:31.391069  387237 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:01:31.851348  387237 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:01:31.851949  387237 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:01:31.854755  387237 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:01:31.858360  387237 out.go:252]   - Booting up control plane ...
	I1027 23:01:31.858503  387237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:01:31.858630  387237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:01:31.858715  387237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:01:31.889920  387237 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:01:31.890075  387237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:01:31.899861  387237 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:01:31.900489  387237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:01:31.900562  387237 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:01:32.132538  387237 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:01:32.132724  387237 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:01:33.636496  387237 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.503607147s
	I1027 23:01:33.646083  387237 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:01:33.646206  387237 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.85:8443/livez
	I1027 23:01:33.646317  387237 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:01:33.646419  387237 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:01:36.221369  387237 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.57478969s
	I1027 23:01:36.565827  387237 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.92004853s
	I1027 23:05:33.651551  387237 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.000287406s
	I1027 23:05:33.651601  387237 kubeadm.go:319] 
	I1027 23:05:33.651725  387237 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1027 23:05:33.651825  387237 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1027 23:05:33.651942  387237 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1027 23:05:33.652025  387237 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1027 23:05:33.652145  387237 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1027 23:05:33.652266  387237 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1027 23:05:33.652278  387237 kubeadm.go:319] 
	I1027 23:05:33.653959  387237 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:05:33.654245  387237 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.61.85:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.61.85:8443: connect: connection refused
	I1027 23:05:33.654352  387237 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1027 23:05:33.654424  387237 kubeadm.go:403] duration metric: took 12m13.667806501s to StartCluster
	I1027 23:05:33.654520  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 23:05:33.654613  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 23:05:33.709824  387237 cri.go:89] found id: ""
	I1027 23:05:33.709879  387237 logs.go:282] 0 containers: []
	W1027 23:05:33.709905  387237 logs.go:284] No container was found matching "kube-apiserver"
	I1027 23:05:33.709915  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 23:05:33.710009  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 23:05:33.749859  387237 cri.go:89] found id: ""
	I1027 23:05:33.749909  387237 logs.go:282] 0 containers: []
	W1027 23:05:33.749921  387237 logs.go:284] No container was found matching "etcd"
	I1027 23:05:33.749932  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 23:05:33.749987  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 23:05:33.795987  387237 cri.go:89] found id: ""
	I1027 23:05:33.796025  387237 logs.go:282] 0 containers: []
	W1027 23:05:33.796036  387237 logs.go:284] No container was found matching "coredns"
	I1027 23:05:33.796044  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 23:05:33.796173  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 23:05:33.835683  387237 cri.go:89] found id: "195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf"
	I1027 23:05:33.835714  387237 cri.go:89] found id: ""
	I1027 23:05:33.835726  387237 logs.go:282] 1 containers: [195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf]
	I1027 23:05:33.835792  387237 ssh_runner.go:195] Run: which crictl
	I1027 23:05:33.840923  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 23:05:33.840998  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 23:05:33.881942  387237 cri.go:89] found id: ""
	I1027 23:05:33.881976  387237 logs.go:282] 0 containers: []
	W1027 23:05:33.881984  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 23:05:33.881993  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 23:05:33.882054  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 23:05:33.922601  387237 cri.go:89] found id: "8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55"
	I1027 23:05:33.922634  387237 cri.go:89] found id: ""
	I1027 23:05:33.922644  387237 logs.go:282] 1 containers: [8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55]
	I1027 23:05:33.922705  387237 ssh_runner.go:195] Run: which crictl
	I1027 23:05:33.928300  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 23:05:33.928386  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 23:05:33.968478  387237 cri.go:89] found id: ""
	I1027 23:05:33.968515  387237 logs.go:282] 0 containers: []
	W1027 23:05:33.968525  387237 logs.go:284] No container was found matching "kindnet"
	I1027 23:05:33.968533  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 23:05:33.968607  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 23:05:34.009588  387237 cri.go:89] found id: ""
	I1027 23:05:34.009627  387237 logs.go:282] 0 containers: []
	W1027 23:05:34.009638  387237 logs.go:284] No container was found matching "storage-provisioner"
	I1027 23:05:34.009653  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 23:05:34.009671  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 23:05:34.124620  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 23:05:34.124667  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 23:05:34.143333  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 23:05:34.143377  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 23:05:34.222766  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 23:05:34.222794  387237 logs.go:123] Gathering logs for kube-scheduler [195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf] ...
	I1027 23:05:34.222810  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf"
	I1027 23:05:34.292301  387237 logs.go:123] Gathering logs for kube-controller-manager [8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55] ...
	I1027 23:05:34.292349  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55"
	I1027 23:05:34.339527  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 23:05:34.339560  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 23:05:34.542379  387237 logs.go:123] Gathering logs for container status ...
	I1027 23:05:34.542428  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
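The "Gathering logs" steps above can be reproduced by hand on the node. A rough equivalent of what is collected here, with commands and line limits taken from the Run: lines above (--no-pager is added for convenience when running interactively):
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400 --no-pager
	sudo crictl ps -a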
	W1027 23:05:34.591395  387237 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.503607147s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.85:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.57478969s
	[control-plane-check] kube-scheduler is healthy after 2.92004853s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000287406s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.61.85:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.61.85:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	W1027 23:05:34.591529  387237 out.go:285] * 
	* 
	W1027 23:05:34.591640  387237 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.503607147s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.85:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.57478969s
	[control-plane-check] kube-scheduler is healthy after 2.92004853s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000287406s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.61.85:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.61.85:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
E1027 23:05:35.010478  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kindnet-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.503607147s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.85:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.57478969s
	[control-plane-check] kube-scheduler is healthy after 2.92004853s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000287406s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.61.85:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.61.85:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	W1027 23:05:34.591665  387237 out.go:285] * 
	* 
	W1027 23:05:34.593550  387237 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
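For this profile, the log collection suggested in the box above would look roughly like the following (profile name taken from this test run; --file is the standard minikube logs flag):
	out/minikube-linux-amd64 -p kubernetes-upgrade-216520 logs --file=logs.txt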
	I1027 23:05:34.596755  387237 out.go:203] 
	W1027 23:05:34.598076  387237 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.503607147s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.85:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.57478969s
	[control-plane-check] kube-scheduler is healthy after 2.92004853s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000287406s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.61.85:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.61.85:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.503607147s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.85:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.57478969s
	[control-plane-check] kube-scheduler is healthy after 2.92004853s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000287406s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.61.85:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.61.85:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	
	W1027 23:05:34.598103  387237 out.go:285] * 
	* 
	I1027 23:05:34.599578  387237 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-216520 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-10-27 23:05:35.01371848 +0000 UTC m=+4586.621400461
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-216520 -n kubernetes-upgrade-216520
E1027 23:05:35.035424  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kindnet-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:05:35.076917  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kindnet-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:05:35.158503  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kindnet-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-216520 -n kubernetes-upgrade-216520: exit status 2 (237.122114ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-216520 logs -n 25
E1027 23:05:35.320435  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kindnet-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:05:35.642769  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kindnet-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:05:36.284050  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/kindnet-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-561731 sudo journalctl -xeu kubelet --all --full --no-pager          │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo cat /etc/kubernetes/kubelet.conf                         │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p bridge-561731 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;   │ bridge-561731  │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo cat /var/lib/kubelet/config.yaml                         │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p bridge-561731 sudo crio config                                               │ bridge-561731  │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ delete  │ -p bridge-561731                                                                │ bridge-561731  │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo systemctl status docker --all --full --no-pager          │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │                     │
	│ ssh     │ -p flannel-561731 sudo systemctl cat docker --no-pager                          │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo cat /etc/docker/daemon.json                              │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo docker system info                                       │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │                     │
	│ ssh     │ -p flannel-561731 sudo systemctl status cri-docker --all --full --no-pager      │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │                     │
	│ ssh     │ -p flannel-561731 sudo systemctl cat cri-docker --no-pager                      │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │                     │
	│ ssh     │ -p flannel-561731 sudo cat /usr/lib/systemd/system/cri-docker.service           │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo cri-dockerd --version                                    │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo systemctl status containerd --all --full --no-pager      │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │                     │
	│ ssh     │ -p flannel-561731 sudo systemctl cat containerd --no-pager                      │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo cat /lib/systemd/system/containerd.service               │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo cat /etc/containerd/config.toml                          │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo containerd config dump                                   │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo systemctl status crio --all --full --no-pager            │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo systemctl cat crio --no-pager                            │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ ssh     │ -p flannel-561731 sudo crio config                                              │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	│ delete  │ -p flannel-561731                                                               │ flannel-561731 │ jenkins │ v1.37.0 │ 27 Oct 25 23:03 UTC │ 27 Oct 25 23:03 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:01:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:01:34.806471  396818 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:01:34.806600  396818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:01:34.806606  396818 out.go:374] Setting ErrFile to fd 2...
	I1027 23:01:34.806613  396818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:01:34.806996  396818 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 23:01:34.807654  396818 out.go:368] Setting JSON to false
	I1027 23:01:34.808996  396818 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9842,"bootTime":1761596253,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 23:01:34.809133  396818 start.go:143] virtualization: kvm guest
	I1027 23:01:34.812990  396818 out.go:179] * [bridge-561731] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 23:01:34.817225  396818 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:01:34.817205  396818 notify.go:221] Checking for updates...
	I1027 23:01:34.819856  396818 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:01:34.821751  396818 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 23:01:34.823174  396818 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 23:01:34.824616  396818 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 23:01:34.828499  396818 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:01:34.830642  396818 config.go:182] Loaded profile config "enable-default-cni-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:01:34.830792  396818 config.go:182] Loaded profile config "flannel-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:01:34.830912  396818 config.go:182] Loaded profile config "guest-734990": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1027 23:01:34.831025  396818 config.go:182] Loaded profile config "kubernetes-upgrade-216520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:01:34.831145  396818 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:01:34.884692  396818 out.go:179] * Using the kvm2 driver based on user configuration
	I1027 23:01:34.886179  396818 start.go:307] selected driver: kvm2
	I1027 23:01:34.886206  396818 start.go:928] validating driver "kvm2" against <nil>
	I1027 23:01:34.886238  396818 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:01:34.887441  396818 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 23:01:34.887860  396818 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:01:34.887928  396818 cni.go:84] Creating CNI manager for "bridge"
	I1027 23:01:34.887948  396818 start_flags.go:335] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 23:01:34.888030  396818 start.go:351] cluster config:
	{Name:bridge-561731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-561731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:01:34.888179  396818 iso.go:125] acquiring lock: {Name:mk5fa90c3652bd8ab3eadfb83a864dcc2e121c25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:01:34.889884  396818 out.go:179] * Starting "bridge-561731" primary control-plane node in "bridge-561731" cluster
	I1027 23:01:32.498198  395987 main.go:143] libmachine: waiting for domain to start...
	I1027 23:01:32.500082  395987 main.go:143] libmachine: domain is now running
	I1027 23:01:32.500100  395987 main.go:143] libmachine: waiting for IP...
	I1027 23:01:32.501602  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:32.502445  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:32.502468  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:32.502923  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:32.502980  395987 retry.go:31] will retry after 269.74145ms: waiting for domain to come up
	I1027 23:01:32.775737  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:32.776782  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:32.776806  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:32.777323  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:32.777379  395987 retry.go:31] will retry after 382.681015ms: waiting for domain to come up
	I1027 23:01:33.162211  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:33.163026  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:33.163047  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:33.163487  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:33.163535  395987 retry.go:31] will retry after 371.338736ms: waiting for domain to come up
	I1027 23:01:33.536440  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:33.538084  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:33.538107  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:33.538701  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:33.538749  395987 retry.go:31] will retry after 549.14246ms: waiting for domain to come up
	I1027 23:01:34.089480  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:34.090747  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:34.090773  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:34.091328  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:34.091374  395987 retry.go:31] will retry after 699.265461ms: waiting for domain to come up
	I1027 23:01:34.792657  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:34.793716  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:34.793737  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:34.794344  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:34.794387  395987 retry.go:31] will retry after 856.220363ms: waiting for domain to come up
	I1027 23:01:35.652220  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:35.653097  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:35.653118  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:35.653539  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:35.653588  395987 retry.go:31] will retry after 1.04461679s: waiting for domain to come up
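The repeated "will retry after ...: waiting for domain to come up" entries above come from a retry helper that re-polls the libvirt DHCP lease and ARP tables with a growing delay until the guest reports an address. A minimal Go sketch of that pattern follows; lookupDomainIP, the delays, and the jitter are illustrative assumptions, not minikube's actual implementation.

package sketch

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupDomainIP stands in for querying the libvirt DHCP lease / ARP tables;
// it returns an error until the guest has obtained an address (hypothetical).
func lookupDomainIP(domain string) (string, error) {
	return "", errors.New("no network interface addresses found")
}

// waitForIP polls with a jittered, growing delay, mirroring the
// "will retry after ..." log lines emitted while the VM boots.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupDomainIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the interval, roughly like the log above
	}
	return "", fmt.Errorf("domain %s never reported an IP", domain)
}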
	I1027 23:01:34.494824  395132 crio.go:462] duration metric: took 1.996128962s to copy over tarball
	I1027 23:01:34.494943  395132 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 23:01:36.491638  395132 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.996653555s)
	I1027 23:01:36.491673  395132 crio.go:469] duration metric: took 1.996809444s to extract the tarball
	I1027 23:01:36.491683  395132 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1027 23:01:36.543273  395132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:01:36.601712  395132 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:01:36.601744  395132 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:01:36.601754  395132 kubeadm.go:935] updating node { 192.168.50.5 8443 v1.34.1 crio true true} ...
	I1027 23:01:36.601867  395132 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-561731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:enable-default-cni-561731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1027 23:01:36.601977  395132 ssh_runner.go:195] Run: crio config
	I1027 23:01:36.652694  395132 cni.go:84] Creating CNI manager for "bridge"
	I1027 23:01:36.652743  395132 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:01:36.652774  395132 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.5 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-561731 NodeName:enable-default-cni-561731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:01:36.652959  395132 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-561731"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 23:01:36.653041  395132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:01:36.666478  395132 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:01:36.666566  395132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:01:36.679491  395132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1027 23:01:36.702256  395132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:01:36.724984  395132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1027 23:01:36.748507  395132 ssh_runner.go:195] Run: grep 192.168.50.5	control-plane.minikube.internal$ /etc/hosts
	I1027 23:01:36.753111  395132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:01:36.768848  395132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:01:36.913524  395132 ssh_runner.go:195] Run: sudo systemctl start kubelet
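The bash one-liner above rewrites /etc/hosts idempotently: it drops any existing control-plane.minikube.internal line, appends the fresh IP mapping, and copies the rebuilt file back with sudo before the kubelet is restarted. A hedged Go sketch of building an equivalent command (runSSH is a hypothetical stand-in for minikube's ssh_runner):

package sketch

import "fmt"

// hostsUpdateCmd builds the same kind of shell pipeline seen in the log:
// strip any stale entry for host, append the current ip, then install the
// new file via sudo cp.
func hostsUpdateCmd(ip, host string) string {
	// "\t" is a real tab character in the Go string, matching the
	// hosts-file column separator used in the logged command.
	return fmt.Sprintf(
		"{ grep -v '\t%s$' /etc/hosts; echo '%s\t%s'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts",
		host, ip, host)
}

// Usage (hypothetical runner):
//   out, err := runSSH("/bin/bash -c " + hostsUpdateCmd("192.168.50.5", "control-plane.minikube.internal"))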
	I1027 23:01:36.934923  395132 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731 for IP: 192.168.50.5
	I1027 23:01:36.934956  395132 certs.go:195] generating shared ca certs ...
	I1027 23:01:36.934976  395132 certs.go:227] acquiring lock for ca certs: {Name:mk64cd4e3986ee7cc99471fffd6acf1db89a5b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:36.935244  395132 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key
	I1027 23:01:36.935313  395132 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key
	I1027 23:01:36.935326  395132 certs.go:257] generating profile certs ...
	I1027 23:01:36.935402  395132 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/client.key
	I1027 23:01:36.935417  395132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/client.crt with IP's: []
	I1027 23:01:37.256399  395132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/client.crt ...
	I1027 23:01:37.256433  395132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/client.crt: {Name:mk4d5520a6fa63fce5b46a188a01c4a32826fcd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:37.256616  395132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/client.key ...
	I1027 23:01:37.256638  395132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/client.key: {Name:mk812fd234faafce59c91a9696669d27dfb5d137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:37.256725  395132 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.key.dd485082
	I1027 23:01:37.256742  395132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.crt.dd485082 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.5]
	I1027 23:01:37.477695  395132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.crt.dd485082 ...
	I1027 23:01:37.477731  395132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.crt.dd485082: {Name:mk994f6a13e4a03e778b2a832102bc16480477d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:37.477917  395132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.key.dd485082 ...
	I1027 23:01:37.477932  395132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.key.dd485082: {Name:mk9fe8bf2a439c851545adc3259791b2e5314558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:37.478013  395132 certs.go:382] copying /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.crt.dd485082 -> /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.crt
	I1027 23:01:37.478088  395132 certs.go:386] copying /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.key.dd485082 -> /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.key
	I1027 23:01:37.478150  395132 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/proxy-client.key
	I1027 23:01:37.478171  395132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/proxy-client.crt with IP's: []
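The "generating signed profile cert ... with IP's: [...]" steps correspond to issuing leaf certificates whose IP SANs cover the service VIP, loopback, and the node address, signed by the shared minikubeCA. A minimal crypto/x509 sketch of that shape is below; the key size, serial handling, and exact validity window are illustrative assumptions rather than minikube's precise values.

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a server certificate with the given IP SANs,
// signed by caCert/caKey (the shared CA in the log above).
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative only
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.5
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}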
	I1027 23:01:33.636496  387237 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.503607147s
	I1027 23:01:33.646083  387237 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:01:33.646206  387237 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.85:8443/livez
	I1027 23:01:33.646317  387237 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:01:33.646419  387237 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:01:36.221369  387237 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.57478969s
	I1027 23:01:36.565827  387237 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.92004853s
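The [control-plane-check] lines poll each component's health endpoint (kube-apiserver /livez on the node address, kube-controller-manager and kube-scheduler on localhost ports) until it answers 200. A minimal Go sketch of such a probe loop, under the assumption of a fixed poll interval and skipped TLS verification during bootstrap; kubeadm's real check differs in detail:

package sketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a control-plane health URL such as
// https://192.168.61.85:8443/livez or https://127.0.0.1:10257/healthz
// until it returns 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Serving certs are freshly generated and self-signed at this point,
		// so the probe skips verification (assumption for this sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}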
	I1027 23:01:37.811333  395132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/proxy-client.crt ...
	I1027 23:01:37.811372  395132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/proxy-client.crt: {Name:mk2607e221ec9a00499496bc8fdcadb0da11e6a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:37.811578  395132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/proxy-client.key ...
	I1027 23:01:37.811604  395132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/proxy-client.key: {Name:mkd15fae15b5b0062b81ab0568798deb23e51a86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:37.811791  395132 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem (1338 bytes)
	W1027 23:01:37.811828  395132 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621_empty.pem, impossibly tiny 0 bytes
	I1027 23:01:37.811838  395132 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:01:37.811862  395132 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:01:37.811898  395132 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:01:37.811921  395132 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem (1675 bytes)
	I1027 23:01:37.811960  395132 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem (1708 bytes)
	I1027 23:01:37.812539  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:01:37.847595  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:01:37.883552  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:01:37.917023  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 23:01:37.950438  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1027 23:01:37.983533  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 23:01:38.023861  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:01:38.064114  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/enable-default-cni-561731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:01:38.102318  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /usr/share/ca-certificates/3566212.pem (1708 bytes)
	I1027 23:01:38.141047  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:01:38.176615  395132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem --> /usr/share/ca-certificates/356621.pem (1338 bytes)
	I1027 23:01:38.210807  395132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:01:38.234011  395132 ssh_runner.go:195] Run: openssl version
	I1027 23:01:38.240854  395132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356621.pem && ln -fs /usr/share/ca-certificates/356621.pem /etc/ssl/certs/356621.pem"
	I1027 23:01:38.254644  395132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356621.pem
	I1027 23:01:38.262032  395132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 21:58 /usr/share/ca-certificates/356621.pem
	I1027 23:01:38.262117  395132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356621.pem
	I1027 23:01:38.270177  395132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356621.pem /etc/ssl/certs/51391683.0"
	I1027 23:01:38.284764  395132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3566212.pem && ln -fs /usr/share/ca-certificates/3566212.pem /etc/ssl/certs/3566212.pem"
	I1027 23:01:38.299597  395132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3566212.pem
	I1027 23:01:38.306010  395132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 21:58 /usr/share/ca-certificates/3566212.pem
	I1027 23:01:38.306084  395132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3566212.pem
	I1027 23:01:38.314307  395132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3566212.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:01:38.329934  395132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:01:38.344551  395132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:01:38.350643  395132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:01:38.350713  395132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:01:38.359185  395132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
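The openssl x509 -hash / ln -fs pairs above follow the standard OpenSSL CA-directory convention: each trusted PEM under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs so verification can locate it by hash. A hedged Go sketch that shells out to the same openssl invocation and creates the link:

package sketch

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the log above: compute the certificate's subject
// hash with openssl, then point /etc/ssl/certs/<hash>.0 at the PEM file.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}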
	I1027 23:01:38.374442  395132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:01:38.379658  395132 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 23:01:38.379722  395132 kubeadm.go:401] StartCluster: {Name:enable-default-cni-561731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:enable-default-cni-561731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:01:38.379805  395132 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:01:38.379882  395132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:01:38.445440  395132 cri.go:89] found id: ""
	I1027 23:01:38.445517  395132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:01:38.472559  395132 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 23:01:38.492839  395132 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 23:01:38.511086  395132 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 23:01:38.511110  395132 kubeadm.go:158] found existing configuration files:
	
	I1027 23:01:38.511157  395132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 23:01:38.526403  395132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 23:01:38.526477  395132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 23:01:38.542167  395132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 23:01:38.555314  395132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 23:01:38.555390  395132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 23:01:38.568989  395132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 23:01:38.581519  395132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 23:01:38.581607  395132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 23:01:38.595738  395132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 23:01:38.608583  395132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 23:01:38.608665  395132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 23:01:38.622119  395132 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 23:01:38.685010  395132 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 23:01:38.685133  395132 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:01:38.805021  395132 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:01:38.805195  395132 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:01:38.805351  395132 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 23:01:38.817514  395132 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 23:01:34.891268  396818 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:01:34.891320  396818 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 23:01:34.891332  396818 cache.go:59] Caching tarball of preloaded images
	I1027 23:01:34.891473  396818 preload.go:233] Found /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 23:01:34.891489  396818 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:01:34.891633  396818 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/config.json ...
	I1027 23:01:34.891680  396818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/config.json: {Name:mk18c29977c3a958c1047e7e23129e0e6658c0f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:34.891938  396818 start.go:360] acquireMachinesLock for bridge-561731: {Name:mka983f7fa498b8241736aecc4fdc7843c00414d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 23:01:36.700140  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:36.700785  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:36.700802  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:36.701240  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:36.701282  395987 retry.go:31] will retry after 950.895236ms: waiting for domain to come up
	I1027 23:01:37.653421  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:37.654239  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:37.654260  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:37.654698  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:37.654746  395987 retry.go:31] will retry after 1.851905617s: waiting for domain to come up
	I1027 23:01:39.508874  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:39.509712  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:39.509733  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:39.510124  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:39.510166  395987 retry.go:31] will retry after 1.546143213s: waiting for domain to come up
	I1027 23:01:38.991707  395132 out.go:252]   - Generating certificates and keys ...
	I1027 23:01:38.991870  395132 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:01:38.991984  395132 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:01:38.992115  395132 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:01:38.992218  395132 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:01:39.222419  395132 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 23:01:39.530786  395132 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:01:39.619122  395132 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:01:39.619468  395132 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-561731 localhost] and IPs [192.168.50.5 127.0.0.1 ::1]
	I1027 23:01:39.826570  395132 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:01:39.826741  395132 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-561731 localhost] and IPs [192.168.50.5 127.0.0.1 ::1]
	I1027 23:01:40.081088  395132 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:01:40.255441  395132 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 23:01:40.414364  395132 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:01:40.414445  395132 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:01:40.581652  395132 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:01:40.936081  395132 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:01:41.411511  395132 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:01:41.623129  395132 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:01:41.890072  395132 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:01:41.891904  395132 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:01:41.895032  395132 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:01:41.949156  395132 out.go:252]   - Booting up control plane ...
	I1027 23:01:41.949311  395132 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:01:41.949416  395132 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:01:41.949511  395132 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:01:41.949667  395132 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:01:41.949801  395132 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:01:41.949967  395132 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:01:41.950123  395132 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:01:41.950210  395132 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:01:42.106611  395132 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:01:42.106810  395132 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:01:42.608010  395132 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.959231ms
	I1027 23:01:42.612646  395132 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:01:42.612786  395132 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.50.5:8443/livez
	I1027 23:01:42.612939  395132 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:01:42.613058  395132 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:01:41.058380  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:41.059264  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:41.059281  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:41.059752  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:41.059805  395987 retry.go:31] will retry after 1.746374304s: waiting for domain to come up
	I1027 23:01:42.808085  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:42.808967  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:42.808985  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:42.809435  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:42.809477  395987 retry.go:31] will retry after 2.490470822s: waiting for domain to come up
	I1027 23:01:45.303141  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:45.303942  395987 main.go:143] libmachine: no network interface addresses found for domain flannel-561731 (source=lease)
	I1027 23:01:45.303963  395987 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:45.304364  395987 main.go:143] libmachine: unable to find current IP address of domain flannel-561731 in network mk-flannel-561731 (interfaces detected: [])
	I1027 23:01:45.304403  395987 retry.go:31] will retry after 3.732477227s: waiting for domain to come up
	I1027 23:01:45.342265  395132 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.73180518s
	I1027 23:01:46.595000  395132 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.986039161s
	I1027 23:01:48.611376  395132 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003712656s
	I1027 23:01:48.630779  395132 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:01:48.647610  395132 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:01:48.669150  395132 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:01:48.669347  395132 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-561731 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:01:48.691753  395132 kubeadm.go:319] [bootstrap-token] Using token: r81u79.es8ng4culpythwt3
	I1027 23:01:48.693153  395132 out.go:252]   - Configuring RBAC rules ...
	I1027 23:01:48.693302  395132 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:01:48.710131  395132 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:01:48.722531  395132 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:01:48.728462  395132 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:01:48.740162  395132 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:01:48.745466  395132 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:01:49.021347  395132 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:01:49.497077  395132 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:01:50.019973  395132 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:01:50.021447  395132 kubeadm.go:319] 
	I1027 23:01:50.021532  395132 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:01:50.021542  395132 kubeadm.go:319] 
	I1027 23:01:50.021626  395132 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:01:50.021634  395132 kubeadm.go:319] 
	I1027 23:01:50.021660  395132 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:01:50.021773  395132 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:01:50.021866  395132 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:01:50.021876  395132 kubeadm.go:319] 
	I1027 23:01:50.021973  395132 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:01:50.021988  395132 kubeadm.go:319] 
	I1027 23:01:50.022065  395132 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:01:50.022074  395132 kubeadm.go:319] 
	I1027 23:01:50.022155  395132 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:01:50.022268  395132 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:01:50.022377  395132 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:01:50.022386  395132 kubeadm.go:319] 
	I1027 23:01:50.022509  395132 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:01:50.022623  395132 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:01:50.022633  395132 kubeadm.go:319] 
	I1027 23:01:50.022763  395132 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token r81u79.es8ng4culpythwt3 \
	I1027 23:01:50.022949  395132 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cb8ddfbc3ba5a862ece84051047c6250a766e6f4afeb4ad0a97b6e833be7e0c \
	I1027 23:01:50.023002  395132 kubeadm.go:319] 	--control-plane 
	I1027 23:01:50.023021  395132 kubeadm.go:319] 
	I1027 23:01:50.023135  395132 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:01:50.023153  395132 kubeadm.go:319] 
	I1027 23:01:50.023268  395132 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token r81u79.es8ng4culpythwt3 \
	I1027 23:01:50.023397  395132 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cb8ddfbc3ba5a862ece84051047c6250a766e6f4afeb4ad0a97b6e833be7e0c 
	I1027 23:01:50.026751  395132 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:01:50.026797  395132 cni.go:84] Creating CNI manager for "bridge"
	I1027 23:01:50.029567  395132 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1027 23:01:50.796266  396818 start.go:364] duration metric: took 15.904295942s to acquireMachinesLock for "bridge-561731"
	I1027 23:01:50.796353  396818 start.go:93] Provisioning new machine with config: &{Name:bridge-561731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-561731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:01:50.796480  396818 start.go:125] createHost starting for "" (driver="kvm2")
	I1027 23:01:49.039117  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.039811  395987 main.go:143] libmachine: domain flannel-561731 has current primary IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.039830  395987 main.go:143] libmachine: found domain IP: 192.168.72.89
	I1027 23:01:49.039839  395987 main.go:143] libmachine: reserving static IP address...
	I1027 23:01:49.040435  395987 main.go:143] libmachine: unable to find host DHCP lease matching {name: "flannel-561731", mac: "52:54:00:18:2f:14", ip: "192.168.72.89"} in network mk-flannel-561731
	I1027 23:01:49.303157  395987 main.go:143] libmachine: reserved static IP address 192.168.72.89 for domain flannel-561731
	I1027 23:01:49.303206  395987 main.go:143] libmachine: waiting for SSH...
	I1027 23:01:49.303216  395987 main.go:143] libmachine: Getting to WaitForSSH function...
	I1027 23:01:49.307148  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.307655  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:49.307680  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.307923  395987 main.go:143] libmachine: Using SSH client type: native
	I1027 23:01:49.308226  395987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I1027 23:01:49.308239  395987 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1027 23:01:49.418964  395987 main.go:143] libmachine: SSH cmd err, output: <nil>: 
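"Getting to WaitForSSH function" and the `exit 0` command above are a liveness probe: the provisioner keeps opening SSH sessions and running a no-op until the guest's sshd accepts the machine key. A sketch using golang.org/x/crypto/ssh follows; the user, key path, attempt count, and retry cadence are assumptions for illustration.

package sketch

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials addr (e.g. "192.168.72.89:22") and runs `exit 0`
// until a session succeeds, mirroring the probe in the log above.
func waitForSSH(addr, user, keyPath string, attempts int) error {
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // freshly created VM, no known_hosts yet
		Timeout:         10 * time.Second,
	}
	for i := 0; i < attempts; i++ {
		if client, derr := ssh.Dial("tcp", addr, cfg); derr == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				rerr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if rerr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never came up", addr)
}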
	I1027 23:01:49.419377  395987 main.go:143] libmachine: domain creation complete
	I1027 23:01:49.421303  395987 machine.go:94] provisionDockerMachine start ...
	I1027 23:01:49.424877  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.425472  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:49.425511  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.425786  395987 main.go:143] libmachine: Using SSH client type: native
	I1027 23:01:49.426104  395987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I1027 23:01:49.426128  395987 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:01:49.539651  395987 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1027 23:01:49.539702  395987 buildroot.go:166] provisioning hostname "flannel-561731"
	I1027 23:01:49.543033  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.543511  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:49.543536  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.543754  395987 main.go:143] libmachine: Using SSH client type: native
	I1027 23:01:49.544041  395987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I1027 23:01:49.544057  395987 main.go:143] libmachine: About to run SSH command:
	sudo hostname flannel-561731 && echo "flannel-561731" | sudo tee /etc/hostname
	I1027 23:01:49.682145  395987 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-561731
	
	I1027 23:01:49.686308  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.686779  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:49.686841  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.687075  395987 main.go:143] libmachine: Using SSH client type: native
	I1027 23:01:49.687322  395987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I1027 23:01:49.687339  395987 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-561731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-561731/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-561731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:01:49.809273  395987 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:01:49.809314  395987 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21790-352679/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-352679/.minikube}
	I1027 23:01:49.809360  395987 buildroot.go:174] setting up certificates
	I1027 23:01:49.809373  395987 provision.go:84] configureAuth start
	I1027 23:01:49.813060  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.813581  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:49.813637  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.816783  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.817299  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:49.817337  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:49.817518  395987 provision.go:143] copyHostCerts
	I1027 23:01:49.817587  395987 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem, removing ...
	I1027 23:01:49.817618  395987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem
	I1027 23:01:49.817722  395987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem (1082 bytes)
	I1027 23:01:49.817850  395987 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem, removing ...
	I1027 23:01:49.817864  395987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem
	I1027 23:01:49.817947  395987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem (1123 bytes)
	I1027 23:01:49.818036  395987 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem, removing ...
	I1027 23:01:49.818046  395987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem
	I1027 23:01:49.818089  395987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem (1675 bytes)
	I1027 23:01:49.818160  395987 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem org=jenkins.flannel-561731 san=[127.0.0.1 192.168.72.89 flannel-561731 localhost minikube]
	I1027 23:01:50.006688  395987 provision.go:177] copyRemoteCerts
	I1027 23:01:50.006749  395987 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:01:50.009349  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.009730  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:50.009762  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.009923  395987 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/flannel-561731/id_rsa Username:docker}
	I1027 23:01:50.097361  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:01:50.134184  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1027 23:01:50.174521  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 23:01:50.221730  395987 provision.go:87] duration metric: took 412.340619ms to configureAuth
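Editor's note: provision.go:117 above issues a server certificate whose SANs cover 127.0.0.1, the VM IP, the profile hostname, localhost and minikube, then copies it to /etc/docker on the guest. A self-contained sketch of producing such a SAN-bearing certificate with Go's crypto/x509; it self-signs for brevity, whereas the real provisioner signs against ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-561731"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.89")},
		DNSNames:    []string{"flannel-561731", "localhost", "minikube"},
	}
	// Self-signed here only to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
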
	I1027 23:01:50.221772  395987 buildroot.go:189] setting minikube options for container-runtime
	I1027 23:01:50.222028  395987 config.go:182] Loaded profile config "flannel-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:01:50.225469  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.226000  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:50.226035  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.226258  395987 main.go:143] libmachine: Using SSH client type: native
	I1027 23:01:50.226486  395987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I1027 23:01:50.226507  395987 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:01:50.517772  395987 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:01:50.517820  395987 machine.go:97] duration metric: took 1.096494s to provisionDockerMachine
	I1027 23:01:50.517835  395987 client.go:176] duration metric: took 20.245455096s to LocalClient.Create
	I1027 23:01:50.517870  395987 start.go:167] duration metric: took 20.245534108s to libmachine.API.Create "flannel-561731"
	I1027 23:01:50.517881  395987 start.go:293] postStartSetup for "flannel-561731" (driver="kvm2")
	I1027 23:01:50.517926  395987 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:01:50.518021  395987 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:01:50.521387  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.521989  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:50.522029  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.522292  395987 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/flannel-561731/id_rsa Username:docker}
	I1027 23:01:50.612255  395987 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:01:50.618033  395987 info.go:137] Remote host: Buildroot 2025.02
	I1027 23:01:50.618077  395987 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/addons for local assets ...
	I1027 23:01:50.618166  395987 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/files for local assets ...
	I1027 23:01:50.618302  395987 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem -> 3566212.pem in /etc/ssl/certs
	I1027 23:01:50.618454  395987 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:01:50.632963  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /etc/ssl/certs/3566212.pem (1708 bytes)
	I1027 23:01:50.668384  395987 start.go:296] duration metric: took 150.456431ms for postStartSetup
	I1027 23:01:50.672457  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.672997  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:50.673039  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.673415  395987 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/config.json ...
	I1027 23:01:50.673745  395987 start.go:128] duration metric: took 20.405406614s to createHost
	I1027 23:01:50.677290  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.677825  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:50.677870  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.678127  395987 main.go:143] libmachine: Using SSH client type: native
	I1027 23:01:50.678508  395987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.89 22 <nil> <nil>}
	I1027 23:01:50.678538  395987 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1027 23:01:50.796101  395987 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761606110.752415712
	
	I1027 23:01:50.796136  395987 fix.go:217] guest clock: 1761606110.752415712
	I1027 23:01:50.796146  395987 fix.go:230] Guest: 2025-10-27 23:01:50.752415712 +0000 UTC Remote: 2025-10-27 23:01:50.673764737 +0000 UTC m=+29.707278665 (delta=78.650975ms)
	I1027 23:01:50.796165  395987 fix.go:201] guest clock delta is within tolerance: 78.650975ms
	I1027 23:01:50.796170  395987 start.go:83] releasing machines lock for "flannel-561731", held for 20.5280334s
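Editor's note: fix.go:217/230 above compares the guest clock (read via `date +%s.%N` over SSH) with the host-side timestamp and only resyncs when the skew exceeds a tolerance; here the delta is 78.650975ms and is left alone. A sketch of that comparison using the two timestamps from the log (the 2-second tolerance is an assumed value for illustration):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N` in the log.
	guest := time.Unix(1761606110, 752415712)
	// Host-side timestamp recorded for the same moment (from the fix.go:230 line).
	host := time.Date(2025, 10, 27, 23, 1, 50, 673764737, time.UTC)

	delta := guest.Sub(host)
	// Tolerance is illustrative; the point is only to skip resync for small skews.
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < tolerance && delta > -tolerance)
}
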
	I1027 23:01:50.799756  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.800282  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:50.800323  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.800948  395987 ssh_runner.go:195] Run: cat /version.json
	I1027 23:01:50.800990  395987 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:01:50.804107  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.804339  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.804561  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:50.804602  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.804775  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:50.804792  395987 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/flannel-561731/id_rsa Username:docker}
	I1027 23:01:50.804810  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:50.805081  395987 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/flannel-561731/id_rsa Username:docker}
	I1027 23:01:50.902246  395987 ssh_runner.go:195] Run: systemctl --version
	I1027 23:01:50.939619  395987 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:01:51.103526  395987 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:01:51.111902  395987 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:01:51.111976  395987 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:01:51.138664  395987 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 23:01:51.138693  395987 start.go:496] detecting cgroup driver to use...
	I1027 23:01:51.138759  395987 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:01:51.166940  395987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:01:51.190858  395987 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:01:51.190946  395987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:01:51.216138  395987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:01:51.235129  395987 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:01:51.401470  395987 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:01:51.614678  395987 docker.go:234] disabling docker service ...
	I1027 23:01:51.614767  395987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:01:51.637697  395987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:01:51.657616  395987 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:01:51.827218  395987 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:01:51.981168  395987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:01:51.999205  395987 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:01:52.024717  395987 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:01:52.024798  395987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:01:52.038819  395987 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:01:52.038922  395987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:01:52.053710  395987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:01:52.068213  395987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:01:52.082719  395987 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:01:52.098213  395987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:01:52.112210  395987 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:01:52.134757  395987 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:01:52.150184  395987 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:01:52.166086  395987 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 23:01:52.166154  395987 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 23:01:52.193259  395987 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:01:52.208675  395987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:01:52.376315  395987 ssh_runner.go:195] Run: sudo systemctl restart crio
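Editor's note: the sequence above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", default_sysctls) and then restarts crio. A simplified Go sketch of the same in-place edits on the drop-in contents; the rewriteCrioDropIn helper is an assumption and omits the default_sysctls handling and the write-back:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioDropIn mirrors the sed edits from the log: set the pause image
// and force the cgroupfs cgroup manager with conmon_cgroup pinned to "pod".
func rewriteCrioDropIn(conf, pauseImage string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Println(rewriteCrioDropIn(sample, "registry.k8s.io/pause:3.10.1"))
}
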
	I1027 23:01:52.509979  395987 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:01:52.510049  395987 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:01:52.516357  395987 start.go:564] Will wait 60s for crictl version
	I1027 23:01:52.516413  395987 ssh_runner.go:195] Run: which crictl
	I1027 23:01:52.521360  395987 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 23:01:52.576557  395987 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 23:01:52.576648  395987 ssh_runner.go:195] Run: crio --version
	I1027 23:01:52.617266  395987 ssh_runner.go:195] Run: crio --version
	I1027 23:01:52.668181  395987 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1027 23:01:50.030797  395132 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1027 23:01:50.052417  395132 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1027 23:01:50.081735  395132 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:01:50.081825  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:01:50.081850  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-561731 minikube.k8s.io/updated_at=2025_10_27T23_01_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=enable-default-cni-561731 minikube.k8s.io/primary=true
	I1027 23:01:50.321932  395132 ops.go:34] apiserver oom_adj: -16
	I1027 23:01:50.321982  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:01:50.822781  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:01:51.322823  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:01:51.822090  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:01:52.323029  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:01:52.822141  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:01:53.322053  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:01:53.822814  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:01:54.322901  395132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:01:54.475177  395132 kubeadm.go:1114] duration metric: took 4.393423897s to wait for elevateKubeSystemPrivileges
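Editor's note: the repeated `kubectl get sa default` runs above are how kubeadm.go decides the default ServiceAccount exists and kube-system privileges have been elevated; it polls roughly every 500ms until the command succeeds. A sketch of such a poll with os/exec (the waitForDefaultSA helper name is an assumption; the binary and kubeconfig paths are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes. Hypothetical helper mirroring the loop in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not visible after %v", timeout)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		30*time.Second,
	)
	fmt.Println("waitForDefaultSA:", err)
}
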
	I1027 23:01:54.475212  395132 kubeadm.go:403] duration metric: took 16.095493942s to StartCluster
	I1027 23:01:54.475238  395132 settings.go:142] acquiring lock: {Name:mk9b0cd8ae1e83c76c2473e7845967d905910c67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:54.475320  395132 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 23:01:54.476756  395132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/kubeconfig: {Name:mkf142c57fc1d516984237b4e01b6acd26119765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:54.477091  395132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:01:54.477106  395132 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:01:54.477442  395132 config.go:182] Loaded profile config "enable-default-cni-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:01:54.477498  395132 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:01:54.477680  395132 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-561731"
	I1027 23:01:54.477718  395132 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-561731"
	I1027 23:01:54.477758  395132 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-561731"
	I1027 23:01:54.477725  395132 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-561731"
	I1027 23:01:54.477808  395132 host.go:66] Checking if "enable-default-cni-561731" exists ...
	I1027 23:01:54.479184  395132 out.go:179] * Verifying Kubernetes components...
	I1027 23:01:54.480470  395132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:01:54.483142  395132 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-561731"
	I1027 23:01:54.483189  395132 host.go:66] Checking if "enable-default-cni-561731" exists ...
	I1027 23:01:54.485446  395132 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:01:54.485470  395132 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:01:54.486138  395132 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:01:50.798876  396818 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1027 23:01:50.799111  396818 start.go:159] libmachine.API.Create for "bridge-561731" (driver="kvm2")
	I1027 23:01:50.799155  396818 client.go:173] LocalClient.Create starting
	I1027 23:01:50.799281  396818 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem
	I1027 23:01:50.799365  396818 main.go:143] libmachine: Decoding PEM data...
	I1027 23:01:50.799392  396818 main.go:143] libmachine: Parsing certificate...
	I1027 23:01:50.799471  396818 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem
	I1027 23:01:50.799499  396818 main.go:143] libmachine: Decoding PEM data...
	I1027 23:01:50.799519  396818 main.go:143] libmachine: Parsing certificate...
	I1027 23:01:50.799931  396818 main.go:143] libmachine: creating domain...
	I1027 23:01:50.799946  396818 main.go:143] libmachine: creating network...
	I1027 23:01:50.801970  396818 main.go:143] libmachine: found existing default network
	I1027 23:01:50.802248  396818 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 23:01:50.803811  396818 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:69:f4:72} reservation:<nil>}
	I1027 23:01:50.804731  396818 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:1c:33:cd} reservation:<nil>}
	I1027 23:01:50.806057  396818 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:95:c6} reservation:<nil>}
	I1027 23:01:50.807087  396818 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:0a:03:be} reservation:<nil>}
	I1027 23:01:50.808386  396818 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d8b4e0}
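Editor's note: network.go:211/206 above walks candidate private /24 subnets (192.168.39.0, .50.0, .61.0, .72.0, ...) and picks the first one that no local interface already occupies, here 192.168.83.0/24. A sketch of that check using the standard net package (the candidate list and the step of 11 in the third octet are assumptions inferred from the subnets in the log):

package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any local interface address falls inside cidr.
func subnetTaken(cidr string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ipa, ok := a.(*net.IPNet); ok && ipnet.Contains(ipa.IP) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	for octet := 39; octet <= 254; octet += 11 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		taken, err := subnetTaken(cidr)
		if err != nil {
			panic(err)
		}
		if taken {
			fmt.Println("skipping taken subnet", cidr)
			continue
		}
		fmt.Println("using free private subnet", cidr)
		break
	}
}
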
	I1027 23:01:50.808490  396818 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-bridge-561731</name>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 23:01:50.814266  396818 main.go:143] libmachine: creating private network mk-bridge-561731 192.168.83.0/24...
	I1027 23:01:50.914444  396818 main.go:143] libmachine: private network mk-bridge-561731 192.168.83.0/24 created
	I1027 23:01:50.914790  396818 main.go:143] libmachine: <network>
	  <name>mk-bridge-561731</name>
	  <uuid>c3819718-7d65-4e77-8b3b-5b33674f2987</uuid>
	  <bridge name='virbr5' stp='on' delay='0'/>
	  <mac address='52:54:00:01:66:d8'/>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
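Editor's note: with the subnet chosen, the XML above is handed to libvirt to create the isolated mk-bridge-561731 network (virbr5, DHCP range .2-.253). A sketch of doing the equivalent by shelling out to virsh; minikube itself talks to libvirt through Go bindings rather than virsh, so this is illustration only:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-bridge-561731</name>
  <dns enable='no'/>
  <ip address='192.168.83.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.83.2' end='192.168.83.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	f, err := os.CreateTemp("", "mk-bridge-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// Define the persistent network, then start it and enable autostart.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-bridge-561731"},
		{"net-autostart", "mk-bridge-561731"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("virsh", args[0], "failed:", err)
			return
		}
	}
}
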
	
	I1027 23:01:50.914842  396818 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731 ...
	I1027 23:01:50.914912  396818 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21790-352679/.minikube/cache/iso/amd64/minikube-v1.37.0-1761414747-21797-amd64.iso
	I1027 23:01:50.914939  396818 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 23:01:50.915043  396818 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21790-352679/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21790-352679/.minikube/cache/iso/amd64/minikube-v1.37.0-1761414747-21797-amd64.iso...
	I1027 23:01:51.192736  396818 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/id_rsa...
	I1027 23:01:51.369589  396818 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/bridge-561731.rawdisk...
	I1027 23:01:51.369646  396818 main.go:143] libmachine: Writing magic tar header
	I1027 23:01:51.369666  396818 main.go:143] libmachine: Writing SSH key tar header
	I1027 23:01:51.369772  396818 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731 ...
	I1027 23:01:51.369857  396818 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731
	I1027 23:01:51.369884  396818 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731 (perms=drwx------)
	I1027 23:01:51.369915  396818 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21790-352679/.minikube/machines
	I1027 23:01:51.369932  396818 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21790-352679/.minikube/machines (perms=drwxr-xr-x)
	I1027 23:01:51.369947  396818 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 23:01:51.369965  396818 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21790-352679/.minikube (perms=drwxr-xr-x)
	I1027 23:01:51.369975  396818 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21790-352679
	I1027 23:01:51.369990  396818 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21790-352679 (perms=drwxrwxr-x)
	I1027 23:01:51.370006  396818 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1027 23:01:51.370022  396818 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1027 23:01:51.370034  396818 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1027 23:01:51.370050  396818 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1027 23:01:51.370061  396818 main.go:143] libmachine: checking permissions on dir: /home
	I1027 23:01:51.370068  396818 main.go:143] libmachine: skipping /home - not owner
	I1027 23:01:51.370073  396818 main.go:143] libmachine: defining domain...
	I1027 23:01:51.371715  396818 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>bridge-561731</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/bridge-561731.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-bridge-561731'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1027 23:01:51.378593  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:04:8a:1c in network default
	I1027 23:01:51.379260  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:51.379280  396818 main.go:143] libmachine: starting domain...
	I1027 23:01:51.379285  396818 main.go:143] libmachine: ensuring networks are active...
	I1027 23:01:51.380204  396818 main.go:143] libmachine: Ensuring network default is active
	I1027 23:01:51.380721  396818 main.go:143] libmachine: Ensuring network mk-bridge-561731 is active
	I1027 23:01:51.381447  396818 main.go:143] libmachine: getting domain XML...
	I1027 23:01:51.382974  396818 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>bridge-561731</name>
	  <uuid>9f63af3b-9aaa-409d-80f7-3793474349c2</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/bridge-561731.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:4d:e7:d1'/>
	      <source network='mk-bridge-561731'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:04:8a:1c'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1027 23:01:52.976012  396818 main.go:143] libmachine: waiting for domain to start...
	I1027 23:01:52.977485  396818 main.go:143] libmachine: domain is now running
	I1027 23:01:52.977503  396818 main.go:143] libmachine: waiting for IP...
	I1027 23:01:52.978334  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:52.978977  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:01:52.978993  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:52.979373  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:01:52.979425  396818 retry.go:31] will retry after 274.574245ms: waiting for domain to come up
	I1027 23:01:53.256075  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:53.256780  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:01:53.256804  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:53.257299  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:01:53.257350  396818 retry.go:31] will retry after 344.03385ms: waiting for domain to come up
	I1027 23:01:53.603161  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:53.604189  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:01:53.604211  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:53.604777  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:01:53.604822  396818 retry.go:31] will retry after 368.246522ms: waiting for domain to come up
	I1027 23:01:53.974643  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:53.975570  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:01:53.975599  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:53.976148  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:01:53.976201  396818 retry.go:31] will retry after 526.458415ms: waiting for domain to come up
	I1027 23:01:54.504065  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:54.504956  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:01:54.504978  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:54.505459  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:01:54.505499  396818 retry.go:31] will retry after 662.299092ms: waiting for domain to come up
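Editor's note: the retry.go lines above wait for the freshly started domain to pick up a DHCP lease, re-checking with a randomized, growing delay (about 275ms, 344ms, 368ms, 526ms, 662ms in this run). A sketch of such a jittered backoff schedule; the exact formula minikube uses lives in its retry package, so the one below is an assumption chosen to produce a similar progression:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoff returns a wait duration for the given attempt: a base that grows
// with the attempt number plus up to 50% random jitter. Illustrative only.
func backoff(attempt int) time.Duration {
	base := 250 * time.Millisecond * time.Duration(attempt+1)
	jitter := time.Duration(rand.Int63n(int64(base) / 2))
	return base/2 + jitter
}

func main() {
	for attempt := 0; attempt < 5; attempt++ {
		d := backoff(attempt)
		fmt.Printf("attempt %d: will retry after %v: waiting for domain to come up\n", attempt+1, d)
		time.Sleep(d)
	}
}
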
	I1027 23:01:52.674009  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:52.674681  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:01:52.674713  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:01:52.675034  395987 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1027 23:01:52.680554  395987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:01:52.702710  395987 kubeadm.go:884] updating cluster {Name:flannel-561731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:flannel-561731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:01:52.702880  395987 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:01:52.702994  395987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:01:52.749411  395987 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 23:01:52.749495  395987 ssh_runner.go:195] Run: which lz4
	I1027 23:01:52.754617  395987 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 23:01:52.761217  395987 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 23:01:52.761256  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1027 23:01:54.704904  395987 crio.go:462] duration metric: took 1.950324639s to copy over tarball
	I1027 23:01:54.705014  395987 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 23:01:54.487423  395132 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:01:54.487445  395132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:01:54.490340  395132 main.go:143] libmachine: domain enable-default-cni-561731 has defined MAC address 52:54:00:50:f9:6a in network mk-enable-default-cni-561731
	I1027 23:01:54.491335  395132 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:f9:6a", ip: ""} in network mk-enable-default-cni-561731: {Iface:virbr2 ExpiryTime:2025-10-28 00:01:26 +0000 UTC Type:0 Mac:52:54:00:50:f9:6a Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:enable-default-cni-561731 Clientid:01:52:54:00:50:f9:6a}
	I1027 23:01:54.491381  395132 main.go:143] libmachine: domain enable-default-cni-561731 has defined IP address 192.168.50.5 and MAC address 52:54:00:50:f9:6a in network mk-enable-default-cni-561731
	I1027 23:01:54.491606  395132 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/enable-default-cni-561731/id_rsa Username:docker}
	I1027 23:01:54.493311  395132 main.go:143] libmachine: domain enable-default-cni-561731 has defined MAC address 52:54:00:50:f9:6a in network mk-enable-default-cni-561731
	I1027 23:01:54.493977  395132 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:f9:6a", ip: ""} in network mk-enable-default-cni-561731: {Iface:virbr2 ExpiryTime:2025-10-28 00:01:26 +0000 UTC Type:0 Mac:52:54:00:50:f9:6a Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:enable-default-cni-561731 Clientid:01:52:54:00:50:f9:6a}
	I1027 23:01:54.494017  395132 main.go:143] libmachine: domain enable-default-cni-561731 has defined IP address 192.168.50.5 and MAC address 52:54:00:50:f9:6a in network mk-enable-default-cni-561731
	I1027 23:01:54.494445  395132 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/enable-default-cni-561731/id_rsa Username:docker}
	I1027 23:01:54.856412  395132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:01:54.941805  395132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:01:55.148775  395132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:01:55.163003  395132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:01:55.576799  395132 start.go:977] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
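Editor's note: start.go:977 above confirms the host.minikube.internal record reached CoreDNS; the long sed pipeline a few lines earlier inserts a hosts{} stanza ahead of the forward plugin in the Corefile before replacing the ConfigMap. A sketch of the same string surgery in Go (the sample Corefile is abbreviated and the injectHostRecord helper is an assumption):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza resolving host.minikube.internal
// just before the "forward . /etc/resolv.conf" line of a Corefile.
func injectHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(stanza)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Println(injectHostRecord(corefile, "192.168.50.1"))
}
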
	I1027 23:01:55.579559  395132 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-561731" to be "Ready" ...
	I1027 23:01:55.612526  395132 node_ready.go:49] node "enable-default-cni-561731" is "Ready"
	I1027 23:01:55.612578  395132 node_ready.go:38] duration metric: took 32.979876ms for node "enable-default-cni-561731" to be "Ready" ...
	I1027 23:01:55.612610  395132 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:01:55.612685  395132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:01:56.058367  395132 api_server.go:72] duration metric: took 1.581221427s to wait for apiserver process to appear ...
	I1027 23:01:56.058393  395132 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:01:56.058414  395132 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I1027 23:01:56.072460  395132 api_server.go:279] https://192.168.50.5:8443/healthz returned 200:
	ok
	I1027 23:01:56.074128  395132 api_server.go:141] control plane version: v1.34.1
	I1027 23:01:56.074161  395132 api_server.go:131] duration metric: took 15.76015ms to wait for apiserver health ...
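Editor's note: api_server.go:253/279 above polls https://192.168.50.5:8443/healthz until it answers 200 "ok". A sketch of a single probe with net/http; certificate verification is skipped here purely to keep the sketch short, whereas the real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is for brevity only; do not do this outside a sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.5:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
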
	I1027 23:01:56.074172  395132 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:01:56.082438  395132 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 23:01:56.085071  395132 system_pods.go:59] 8 kube-system pods found
	I1027 23:01:56.085131  395132 system_pods.go:61] "coredns-66bc5c9577-j48gl" [b339dce6-70fe-4476-8353-71e92c18d909] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:56.085143  395132 system_pods.go:61] "coredns-66bc5c9577-wkkjh" [e8d85595-4e02-4738-ac8c-a7c66ed53418] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:56.085152  395132 system_pods.go:61] "etcd-enable-default-cni-561731" [4a3f2814-e2c6-48b8-bfc1-53f035eff248] Running
	I1027 23:01:56.085165  395132 system_pods.go:61] "kube-apiserver-enable-default-cni-561731" [4c134a9f-3780-4e4b-9c40-696f80dbde75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:01:56.085176  395132 system_pods.go:61] "kube-controller-manager-enable-default-cni-561731" [4207f401-fdc9-4a3b-90af-dfc93e8e210e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:01:56.085187  395132 system_pods.go:61] "kube-proxy-rrqzw" [e365f0c3-2826-4118-aadb-b368c671d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 23:01:56.085195  395132 system_pods.go:61] "kube-scheduler-enable-default-cni-561731" [4c5711ab-bd39-4b3f-a9c7-f0096a5fa24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:01:56.085199  395132 system_pods.go:61] "storage-provisioner" [058cc032-92e2-419d-a4f2-e32c516b0769] Pending
	I1027 23:01:56.085208  395132 system_pods.go:74] duration metric: took 11.028112ms to wait for pod list to return data ...
	I1027 23:01:56.085217  395132 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:01:56.085801  395132 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-561731" context rescaled to 1 replicas
	I1027 23:01:56.085929  395132 addons.go:514] duration metric: took 1.60842207s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:01:56.093093  395132 default_sa.go:45] found service account: "default"
	I1027 23:01:56.093131  395132 default_sa.go:55] duration metric: took 7.904814ms for default service account to be created ...
	I1027 23:01:56.093175  395132 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:01:56.098659  395132 system_pods.go:86] 8 kube-system pods found
	I1027 23:01:56.098704  395132 system_pods.go:89] "coredns-66bc5c9577-j48gl" [b339dce6-70fe-4476-8353-71e92c18d909] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:56.098715  395132 system_pods.go:89] "coredns-66bc5c9577-wkkjh" [e8d85595-4e02-4738-ac8c-a7c66ed53418] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:56.098722  395132 system_pods.go:89] "etcd-enable-default-cni-561731" [4a3f2814-e2c6-48b8-bfc1-53f035eff248] Running
	I1027 23:01:56.098732  395132 system_pods.go:89] "kube-apiserver-enable-default-cni-561731" [4c134a9f-3780-4e4b-9c40-696f80dbde75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:01:56.098740  395132 system_pods.go:89] "kube-controller-manager-enable-default-cni-561731" [4207f401-fdc9-4a3b-90af-dfc93e8e210e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:01:56.098749  395132 system_pods.go:89] "kube-proxy-rrqzw" [e365f0c3-2826-4118-aadb-b368c671d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 23:01:56.098758  395132 system_pods.go:89] "kube-scheduler-enable-default-cni-561731" [4c5711ab-bd39-4b3f-a9c7-f0096a5fa24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:01:56.098766  395132 system_pods.go:89] "storage-provisioner" [058cc032-92e2-419d-a4f2-e32c516b0769] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:01:56.098805  395132 retry.go:31] will retry after 266.718149ms: missing components: kube-dns, kube-proxy
	I1027 23:01:56.373944  395132 system_pods.go:86] 8 kube-system pods found
	I1027 23:01:56.373988  395132 system_pods.go:89] "coredns-66bc5c9577-j48gl" [b339dce6-70fe-4476-8353-71e92c18d909] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:56.373999  395132 system_pods.go:89] "coredns-66bc5c9577-wkkjh" [e8d85595-4e02-4738-ac8c-a7c66ed53418] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:56.374006  395132 system_pods.go:89] "etcd-enable-default-cni-561731" [4a3f2814-e2c6-48b8-bfc1-53f035eff248] Running
	I1027 23:01:56.374018  395132 system_pods.go:89] "kube-apiserver-enable-default-cni-561731" [4c134a9f-3780-4e4b-9c40-696f80dbde75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:01:56.374034  395132 system_pods.go:89] "kube-controller-manager-enable-default-cni-561731" [4207f401-fdc9-4a3b-90af-dfc93e8e210e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:01:56.374061  395132 system_pods.go:89] "kube-proxy-rrqzw" [e365f0c3-2826-4118-aadb-b368c671d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 23:01:56.374068  395132 system_pods.go:89] "kube-scheduler-enable-default-cni-561731" [4c5711ab-bd39-4b3f-a9c7-f0096a5fa24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:01:56.374075  395132 system_pods.go:89] "storage-provisioner" [058cc032-92e2-419d-a4f2-e32c516b0769] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:01:56.374097  395132 retry.go:31] will retry after 378.629682ms: missing components: kube-dns, kube-proxy
	I1027 23:01:56.764550  395132 system_pods.go:86] 8 kube-system pods found
	I1027 23:01:56.764597  395132 system_pods.go:89] "coredns-66bc5c9577-j48gl" [b339dce6-70fe-4476-8353-71e92c18d909] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:56.764634  395132 system_pods.go:89] "coredns-66bc5c9577-wkkjh" [e8d85595-4e02-4738-ac8c-a7c66ed53418] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:56.764644  395132 system_pods.go:89] "etcd-enable-default-cni-561731" [4a3f2814-e2c6-48b8-bfc1-53f035eff248] Running
	I1027 23:01:56.764687  395132 system_pods.go:89] "kube-apiserver-enable-default-cni-561731" [4c134a9f-3780-4e4b-9c40-696f80dbde75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:01:56.764704  395132 system_pods.go:89] "kube-controller-manager-enable-default-cni-561731" [4207f401-fdc9-4a3b-90af-dfc93e8e210e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:01:56.764721  395132 system_pods.go:89] "kube-proxy-rrqzw" [e365f0c3-2826-4118-aadb-b368c671d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 23:01:56.764729  395132 system_pods.go:89] "kube-scheduler-enable-default-cni-561731" [4c5711ab-bd39-4b3f-a9c7-f0096a5fa24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:01:56.764739  395132 system_pods.go:89] "storage-provisioner" [058cc032-92e2-419d-a4f2-e32c516b0769] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:01:56.764762  395132 retry.go:31] will retry after 400.458737ms: missing components: kube-dns, kube-proxy
	I1027 23:01:57.171037  395132 system_pods.go:86] 8 kube-system pods found
	I1027 23:01:57.171085  395132 system_pods.go:89] "coredns-66bc5c9577-j48gl" [b339dce6-70fe-4476-8353-71e92c18d909] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:57.171096  395132 system_pods.go:89] "coredns-66bc5c9577-wkkjh" [e8d85595-4e02-4738-ac8c-a7c66ed53418] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:57.171111  395132 system_pods.go:89] "etcd-enable-default-cni-561731" [4a3f2814-e2c6-48b8-bfc1-53f035eff248] Running
	I1027 23:01:57.171121  395132 system_pods.go:89] "kube-apiserver-enable-default-cni-561731" [4c134a9f-3780-4e4b-9c40-696f80dbde75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:01:57.171131  395132 system_pods.go:89] "kube-controller-manager-enable-default-cni-561731" [4207f401-fdc9-4a3b-90af-dfc93e8e210e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:01:57.171142  395132 system_pods.go:89] "kube-proxy-rrqzw" [e365f0c3-2826-4118-aadb-b368c671d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 23:01:57.171149  395132 system_pods.go:89] "kube-scheduler-enable-default-cni-561731" [4c5711ab-bd39-4b3f-a9c7-f0096a5fa24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:01:57.171157  395132 system_pods.go:89] "storage-provisioner" [058cc032-92e2-419d-a4f2-e32c516b0769] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:01:57.171180  395132 retry.go:31] will retry after 381.689159ms: missing components: kube-dns, kube-proxy
	I1027 23:01:57.558439  395132 system_pods.go:86] 8 kube-system pods found
	I1027 23:01:57.558482  395132 system_pods.go:89] "coredns-66bc5c9577-j48gl" [b339dce6-70fe-4476-8353-71e92c18d909] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:57.558492  395132 system_pods.go:89] "coredns-66bc5c9577-wkkjh" [e8d85595-4e02-4738-ac8c-a7c66ed53418] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:57.558508  395132 system_pods.go:89] "etcd-enable-default-cni-561731" [4a3f2814-e2c6-48b8-bfc1-53f035eff248] Running
	I1027 23:01:57.558519  395132 system_pods.go:89] "kube-apiserver-enable-default-cni-561731" [4c134a9f-3780-4e4b-9c40-696f80dbde75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:01:57.558543  395132 system_pods.go:89] "kube-controller-manager-enable-default-cni-561731" [4207f401-fdc9-4a3b-90af-dfc93e8e210e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:01:57.558554  395132 system_pods.go:89] "kube-proxy-rrqzw" [e365f0c3-2826-4118-aadb-b368c671d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 23:01:57.558562  395132 system_pods.go:89] "kube-scheduler-enable-default-cni-561731" [4c5711ab-bd39-4b3f-a9c7-f0096a5fa24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:01:57.558570  395132 system_pods.go:89] "storage-provisioner" [058cc032-92e2-419d-a4f2-e32c516b0769] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:01:57.558590  395132 retry.go:31] will retry after 647.372248ms: missing components: kube-dns, kube-proxy
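	The "will retry after …: missing components" lines above show minikube polling the kube-system pod list with short, randomized backoff until kube-dns and kube-proxy report in. Below is a minimal Go sketch of that poll-with-backoff shape; the function name, signature, and delay values are illustrative only, not minikube's actual retry.go API.

package main

import (
	"fmt"
	"time"
)

// pollUntil runs check before each delay and stops as soon as it succeeds,
// mirroring the "will retry after ..." pattern in the log above.
// Illustrative only; minikube's retry helper has a different signature.
func pollUntil(check func() (bool, string), delays []time.Duration) bool {
	for _, d := range delays {
		ok, missing := check()
		if ok {
			return true
		}
		fmt.Printf("will retry after %v: missing components: %s\n", d, missing)
		time.Sleep(d)
	}
	ok, _ := check()
	return ok
}

func main() {
	attempt := 0
	ready := pollUntil(func() (bool, string) {
		attempt++
		return attempt > 3, "kube-dns, kube-proxy" // pretend the pods come up on the 4th check
	}, []time.Duration{267 * time.Millisecond, 379 * time.Millisecond, 400 * time.Millisecond, 382 * time.Millisecond})
	fmt.Println("k8s-apps running:", ready)
}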
	I1027 23:01:55.169913  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:55.170704  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:01:55.170726  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:55.171295  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:01:55.171342  396818 retry.go:31] will retry after 876.541661ms: waiting for domain to come up
	I1027 23:01:56.049915  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:56.050975  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:01:56.051001  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:56.051507  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:01:56.051554  396818 retry.go:31] will retry after 893.305433ms: waiting for domain to come up
	I1027 23:01:56.946923  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:56.947817  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:01:56.947854  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:56.948399  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:01:56.948453  396818 retry.go:31] will retry after 1.381314958s: waiting for domain to come up
	I1027 23:01:58.331129  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:58.331910  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:01:58.331928  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:58.332300  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:01:58.332354  396818 retry.go:31] will retry after 1.329071274s: waiting for domain to come up
	I1027 23:01:59.664423  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:01:59.665298  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:01:59.665326  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:01:59.665696  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:01:59.665751  396818 retry.go:31] will retry after 2.126530116s: waiting for domain to come up
	I1027 23:01:56.859673  395987 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.154612435s)
	I1027 23:01:56.859711  395987 crio.go:469] duration metric: took 2.154768523s to extract the tarball
	I1027 23:01:56.859723  395987 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1027 23:01:56.917110  395987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:01:56.974181  395987 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:01:56.974212  395987 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:01:56.974223  395987 kubeadm.go:935] updating node { 192.168.72.89 8443 v1.34.1 crio true true} ...
	I1027 23:01:56.974333  395987 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-561731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:flannel-561731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1027 23:01:56.974428  395987 ssh_runner.go:195] Run: crio config
	I1027 23:01:57.031851  395987 cni.go:84] Creating CNI manager for "flannel"
	I1027 23:01:57.031911  395987 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:01:57.031947  395987 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.89 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-561731 NodeName:flannel-561731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:01:57.032157  395987 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-561731"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.89"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.89"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
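	The kubeadm config printed above is a single YAML stream containing four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". As a small illustration only, the Go sketch below uses gopkg.in/yaml.v3 (not anything from minikube or kubeadm itself) to walk such a stream and print each document's apiVersion and kind.

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

// stream is a trimmed stand-in for the multi-document config above.
const stream = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break // end of the multi-document stream
			}
			panic(err)
		}
		fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
	}
}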
	I1027 23:01:57.032241  395987 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:01:57.045196  395987 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:01:57.045290  395987 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:01:57.058496  395987 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1027 23:01:57.087335  395987 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:01:57.109583  395987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 23:01:57.131805  395987 ssh_runner.go:195] Run: grep 192.168.72.89	control-plane.minikube.internal$ /etc/hosts
	I1027 23:01:57.136490  395987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:01:57.151744  395987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:01:57.323050  395987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:01:57.350946  395987 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731 for IP: 192.168.72.89
	I1027 23:01:57.350979  395987 certs.go:195] generating shared ca certs ...
	I1027 23:01:57.351003  395987 certs.go:227] acquiring lock for ca certs: {Name:mk64cd4e3986ee7cc99471fffd6acf1db89a5b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:57.351216  395987 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key
	I1027 23:01:57.351293  395987 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key
	I1027 23:01:57.351314  395987 certs.go:257] generating profile certs ...
	I1027 23:01:57.351393  395987 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/client.key
	I1027 23:01:57.351411  395987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/client.crt with IP's: []
	I1027 23:01:57.560762  395987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/client.crt ...
	I1027 23:01:57.560789  395987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/client.crt: {Name:mk97574bfe2277118be817171cb91276e7589f6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:57.561007  395987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/client.key ...
	I1027 23:01:57.561020  395987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/client.key: {Name:mk92388da865b68baed0a8c38332d8e32e06a13b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:57.561099  395987 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.key.87e9cb6d
	I1027 23:01:57.561115  395987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.crt.87e9cb6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.89]
	I1027 23:01:57.721934  395987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.crt.87e9cb6d ...
	I1027 23:01:57.721965  395987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.crt.87e9cb6d: {Name:mk1432cccc36b441aa9a5ac4ab6257d6ab6ae8a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:57.770771  395987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.key.87e9cb6d ...
	I1027 23:01:57.770824  395987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.key.87e9cb6d: {Name:mke3f49ee921c5046973e750337cd57746888e41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:57.771040  395987 certs.go:382] copying /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.crt.87e9cb6d -> /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.crt
	I1027 23:01:57.771169  395987 certs.go:386] copying /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.key.87e9cb6d -> /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.key
	I1027 23:01:57.771269  395987 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/proxy-client.key
	I1027 23:01:57.771298  395987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/proxy-client.crt with IP's: []
	I1027 23:01:58.176994  395987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/proxy-client.crt ...
	I1027 23:01:58.177034  395987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/proxy-client.crt: {Name:mke6599226e731d6b4ff5fd741edece4e5f61630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:58.177222  395987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/proxy-client.key ...
	I1027 23:01:58.177236  395987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/proxy-client.key: {Name:mk1ec906f0e654cdb15f5c3aac830b627d559dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:01:58.177424  395987 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem (1338 bytes)
	W1027 23:01:58.177465  395987 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621_empty.pem, impossibly tiny 0 bytes
	I1027 23:01:58.177474  395987 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:01:58.177495  395987 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:01:58.177516  395987 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:01:58.177540  395987 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem (1675 bytes)
	I1027 23:01:58.177580  395987 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem (1708 bytes)
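	The certs.go/crypto.go lines above cover generating and writing the per-profile client, apiserver, and aggregator certificates before they are copied onto the node. As a rough, self-contained illustration only (Go's standard crypto/x509, not minikube's own helpers, and self-signed rather than CA-signed), a client certificate with an extra IP SAN can be produced like this:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway RSA key; minikube's real profile certs are
	// signed by its CA, whereas this sketch simply self-signs.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.72.89")}, // node IP taken from the log above
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Emit the certificate in PEM form, the same encoding as the .crt files above.
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}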
	I1027 23:01:58.178177  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:01:58.232672  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:01:58.275449  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:01:58.312962  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 23:01:58.347518  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 23:01:58.381064  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:01:58.423955  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:01:58.534544  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/flannel-561731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:01:58.572611  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem --> /usr/share/ca-certificates/356621.pem (1338 bytes)
	I1027 23:01:58.611392  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /usr/share/ca-certificates/3566212.pem (1708 bytes)
	I1027 23:01:58.646414  395987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:01:58.680549  395987 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:01:58.704706  395987 ssh_runner.go:195] Run: openssl version
	I1027 23:01:58.712277  395987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3566212.pem && ln -fs /usr/share/ca-certificates/3566212.pem /etc/ssl/certs/3566212.pem"
	I1027 23:01:58.727877  395987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3566212.pem
	I1027 23:01:58.734223  395987 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 21:58 /usr/share/ca-certificates/3566212.pem
	I1027 23:01:58.734313  395987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3566212.pem
	I1027 23:01:58.743113  395987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3566212.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:01:58.758374  395987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:01:58.774339  395987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:01:58.781139  395987 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:01:58.781229  395987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:01:58.789648  395987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:01:58.805581  395987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356621.pem && ln -fs /usr/share/ca-certificates/356621.pem /etc/ssl/certs/356621.pem"
	I1027 23:01:58.819504  395987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356621.pem
	I1027 23:01:58.826209  395987 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 21:58 /usr/share/ca-certificates/356621.pem
	I1027 23:01:58.826307  395987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356621.pem
	I1027 23:01:58.834593  395987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356621.pem /etc/ssl/certs/51391683.0"
	I1027 23:01:58.851767  395987 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:01:58.857496  395987 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 23:01:58.857582  395987 kubeadm.go:401] StartCluster: {Name:flannel-561731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:flannel-561731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:01:58.857690  395987 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:01:58.857790  395987 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:01:58.910628  395987 cri.go:89] found id: ""
	I1027 23:01:58.910717  395987 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:01:58.931044  395987 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 23:01:58.948637  395987 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 23:01:58.971671  395987 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 23:01:58.971699  395987 kubeadm.go:158] found existing configuration files:
	
	I1027 23:01:58.971764  395987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 23:01:58.991131  395987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 23:01:58.991211  395987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 23:01:59.009991  395987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 23:01:59.023697  395987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 23:01:59.023777  395987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 23:01:59.036373  395987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 23:01:59.048368  395987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 23:01:59.048468  395987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 23:01:59.064794  395987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 23:01:59.081041  395987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 23:01:59.081105  395987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 23:01:59.094735  395987 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 23:01:59.296570  395987 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:01:58.543589  395132 system_pods.go:86] 8 kube-system pods found
	I1027 23:01:58.543636  395132 system_pods.go:89] "coredns-66bc5c9577-j48gl" [b339dce6-70fe-4476-8353-71e92c18d909] Failed / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:58.543647  395132 system_pods.go:89] "coredns-66bc5c9577-wkkjh" [e8d85595-4e02-4738-ac8c-a7c66ed53418] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:58.543659  395132 system_pods.go:89] "etcd-enable-default-cni-561731" [4a3f2814-e2c6-48b8-bfc1-53f035eff248] Running
	I1027 23:01:58.543686  395132 system_pods.go:89] "kube-apiserver-enable-default-cni-561731" [4c134a9f-3780-4e4b-9c40-696f80dbde75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:01:58.543699  395132 system_pods.go:89] "kube-controller-manager-enable-default-cni-561731" [4207f401-fdc9-4a3b-90af-dfc93e8e210e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:01:58.543706  395132 system_pods.go:89] "kube-proxy-rrqzw" [e365f0c3-2826-4118-aadb-b368c671d1c6] Running
	I1027 23:01:58.543714  395132 system_pods.go:89] "kube-scheduler-enable-default-cni-561731" [4c5711ab-bd39-4b3f-a9c7-f0096a5fa24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:01:58.543730  395132 system_pods.go:89] "storage-provisioner" [058cc032-92e2-419d-a4f2-e32c516b0769] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:01:58.543756  395132 retry.go:31] will retry after 594.50868ms: missing components: kube-dns
	I1027 23:01:59.407529  395132 system_pods.go:86] 8 kube-system pods found
	I1027 23:01:59.407582  395132 system_pods.go:89] "coredns-66bc5c9577-j48gl" [b339dce6-70fe-4476-8353-71e92c18d909] Failed / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:59.407601  395132 system_pods.go:89] "coredns-66bc5c9577-wkkjh" [e8d85595-4e02-4738-ac8c-a7c66ed53418] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:01:59.407609  395132 system_pods.go:89] "etcd-enable-default-cni-561731" [4a3f2814-e2c6-48b8-bfc1-53f035eff248] Running
	I1027 23:01:59.407620  395132 system_pods.go:89] "kube-apiserver-enable-default-cni-561731" [4c134a9f-3780-4e4b-9c40-696f80dbde75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:01:59.407631  395132 system_pods.go:89] "kube-controller-manager-enable-default-cni-561731" [4207f401-fdc9-4a3b-90af-dfc93e8e210e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:01:59.407640  395132 system_pods.go:89] "kube-proxy-rrqzw" [e365f0c3-2826-4118-aadb-b368c671d1c6] Running
	I1027 23:01:59.407648  395132 system_pods.go:89] "kube-scheduler-enable-default-cni-561731" [4c5711ab-bd39-4b3f-a9c7-f0096a5fa24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:01:59.407664  395132 system_pods.go:89] "storage-provisioner" [058cc032-92e2-419d-a4f2-e32c516b0769] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:01:59.407699  395132 retry.go:31] will retry after 1.186990959s: missing components: kube-dns
	I1027 23:02:00.605267  395132 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:00.605322  395132 system_pods.go:89] "coredns-66bc5c9577-wkkjh" [e8d85595-4e02-4738-ac8c-a7c66ed53418] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:00.605333  395132 system_pods.go:89] "etcd-enable-default-cni-561731" [4a3f2814-e2c6-48b8-bfc1-53f035eff248] Running
	I1027 23:02:00.605368  395132 system_pods.go:89] "kube-apiserver-enable-default-cni-561731" [4c134a9f-3780-4e4b-9c40-696f80dbde75] Running
	I1027 23:02:00.605380  395132 system_pods.go:89] "kube-controller-manager-enable-default-cni-561731" [4207f401-fdc9-4a3b-90af-dfc93e8e210e] Running
	I1027 23:02:00.605385  395132 system_pods.go:89] "kube-proxy-rrqzw" [e365f0c3-2826-4118-aadb-b368c671d1c6] Running
	I1027 23:02:00.605395  395132 system_pods.go:89] "kube-scheduler-enable-default-cni-561731" [4c5711ab-bd39-4b3f-a9c7-f0096a5fa24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:02:00.605400  395132 system_pods.go:89] "storage-provisioner" [058cc032-92e2-419d-a4f2-e32c516b0769] Running
	I1027 23:02:00.605408  395132 system_pods.go:126] duration metric: took 4.512219818s to wait for k8s-apps to be running ...
	I1027 23:02:00.605417  395132 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:02:00.605466  395132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:02:00.630297  395132 system_svc.go:56] duration metric: took 24.866548ms WaitForService to wait for kubelet
	I1027 23:02:00.630338  395132 kubeadm.go:587] duration metric: took 6.153196261s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:02:00.630364  395132 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:02:00.634641  395132 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 23:02:00.634685  395132 node_conditions.go:123] node cpu capacity is 2
	I1027 23:02:00.634702  395132 node_conditions.go:105] duration metric: took 4.331786ms to run NodePressure ...
	I1027 23:02:00.634720  395132 start.go:242] waiting for startup goroutines ...
	I1027 23:02:00.634731  395132 start.go:247] waiting for cluster config update ...
	I1027 23:02:00.634750  395132 start.go:256] writing updated cluster config ...
	I1027 23:02:00.635114  395132 ssh_runner.go:195] Run: rm -f paused
	I1027 23:02:00.640857  395132 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:02:00.646794  395132 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wkkjh" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 23:02:02.663737  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	I1027 23:02:01.795705  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:01.796760  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:02:01.796787  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:02:01.797338  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:02:01.797390  396818 retry.go:31] will retry after 1.932558872s: waiting for domain to come up
	I1027 23:02:03.732281  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:03.733407  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:02:03.733442  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:02:03.734046  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:02:03.734097  396818 retry.go:31] will retry after 3.24647709s: waiting for domain to come up
	W1027 23:02:05.153755  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	W1027 23:02:07.154072  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	I1027 23:02:06.983633  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:06.984419  396818 main.go:143] libmachine: no network interface addresses found for domain bridge-561731 (source=lease)
	I1027 23:02:06.984442  396818 main.go:143] libmachine: trying to list again with source=arp
	I1027 23:02:06.984846  396818 main.go:143] libmachine: unable to find current IP address of domain bridge-561731 in network mk-bridge-561731 (interfaces detected: [])
	I1027 23:02:06.984918  396818 retry.go:31] will retry after 4.063627442s: waiting for domain to come up
	W1027 23:02:09.656322  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	W1027 23:02:12.153815  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	I1027 23:02:13.278250  395987 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 23:02:13.278338  395987 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:02:13.278464  395987 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:02:13.278612  395987 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:02:13.278754  395987 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 23:02:13.278835  395987 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 23:02:13.280552  395987 out.go:252]   - Generating certificates and keys ...
	I1027 23:02:13.280678  395987 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:02:13.280763  395987 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:02:13.280864  395987 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:02:13.280956  395987 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:02:13.281048  395987 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 23:02:13.281126  395987 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:02:13.281205  395987 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:02:13.281361  395987 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-561731 localhost] and IPs [192.168.72.89 127.0.0.1 ::1]
	I1027 23:02:13.281445  395987 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:02:13.281609  395987 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-561731 localhost] and IPs [192.168.72.89 127.0.0.1 ::1]
	I1027 23:02:13.281724  395987 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:02:13.281816  395987 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 23:02:13.281911  395987 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:02:13.282002  395987 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:02:13.282081  395987 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:02:13.282158  395987 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:02:13.282243  395987 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:02:13.282360  395987 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:02:13.282454  395987 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:02:13.282565  395987 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:02:13.282682  395987 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:02:13.284217  395987 out.go:252]   - Booting up control plane ...
	I1027 23:02:13.284336  395987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:02:13.284450  395987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:02:13.284509  395987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:02:13.284609  395987 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:02:13.284776  395987 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:02:13.284947  395987 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:02:13.285066  395987 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:02:13.285132  395987 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:02:13.285292  395987 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:02:13.285398  395987 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:02:13.285461  395987 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002732407s
	I1027 23:02:13.285542  395987 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:02:13.285631  395987 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.89:8443/livez
	I1027 23:02:13.285761  395987 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:02:13.285861  395987 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:02:13.285978  395987 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.630685652s
	I1027 23:02:13.286075  395987 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.622478421s
	I1027 23:02:13.286184  395987 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003558549s
	I1027 23:02:13.286338  395987 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:02:13.286509  395987 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:02:13.286603  395987 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:02:13.286811  395987 kubeadm.go:319] [mark-control-plane] Marking the node flannel-561731 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:02:13.286874  395987 kubeadm.go:319] [bootstrap-token] Using token: jmz94w.3ydqjiozyyra22wh
	I1027 23:02:13.289842  395987 out.go:252]   - Configuring RBAC rules ...
	I1027 23:02:13.289989  395987 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:02:13.290150  395987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:02:13.290367  395987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:02:13.290519  395987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:02:13.290635  395987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:02:13.290713  395987 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:02:13.290814  395987 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:02:13.290853  395987 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:02:13.290915  395987 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:02:13.290924  395987 kubeadm.go:319] 
	I1027 23:02:13.290988  395987 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:02:13.290994  395987 kubeadm.go:319] 
	I1027 23:02:13.291119  395987 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:02:13.291146  395987 kubeadm.go:319] 
	I1027 23:02:13.291192  395987 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:02:13.291293  395987 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:02:13.291356  395987 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:02:13.291364  395987 kubeadm.go:319] 
	I1027 23:02:13.291454  395987 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:02:13.291471  395987 kubeadm.go:319] 
	I1027 23:02:13.291541  395987 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:02:13.291553  395987 kubeadm.go:319] 
	I1027 23:02:13.291635  395987 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:02:13.291720  395987 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:02:13.291784  395987 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:02:13.291791  395987 kubeadm.go:319] 
	I1027 23:02:13.291866  395987 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:02:13.291955  395987 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:02:13.291967  395987 kubeadm.go:319] 
	I1027 23:02:13.292037  395987 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jmz94w.3ydqjiozyyra22wh \
	I1027 23:02:13.292129  395987 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cb8ddfbc3ba5a862ece84051047c6250a766e6f4afeb4ad0a97b6e833be7e0c \
	I1027 23:02:13.292148  395987 kubeadm.go:319] 	--control-plane 
	I1027 23:02:13.292152  395987 kubeadm.go:319] 
	I1027 23:02:13.292221  395987 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:02:13.292228  395987 kubeadm.go:319] 
	I1027 23:02:13.292293  395987 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jmz94w.3ydqjiozyyra22wh \
	I1027 23:02:13.292408  395987 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cb8ddfbc3ba5a862ece84051047c6250a766e6f4afeb4ad0a97b6e833be7e0c 
	I1027 23:02:13.292428  395987 cni.go:84] Creating CNI manager for "flannel"
	I1027 23:02:13.294085  395987 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	I1027 23:02:11.050206  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.051278  396818 main.go:143] libmachine: domain bridge-561731 has current primary IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.051299  396818 main.go:143] libmachine: found domain IP: 192.168.83.135
	I1027 23:02:11.051308  396818 main.go:143] libmachine: reserving static IP address...
	I1027 23:02:11.051731  396818 main.go:143] libmachine: unable to find host DHCP lease matching {name: "bridge-561731", mac: "52:54:00:4d:e7:d1", ip: "192.168.83.135"} in network mk-bridge-561731
	I1027 23:02:11.331605  396818 main.go:143] libmachine: reserved static IP address 192.168.83.135 for domain bridge-561731
	I1027 23:02:11.331631  396818 main.go:143] libmachine: waiting for SSH...
	I1027 23:02:11.331640  396818 main.go:143] libmachine: Getting to WaitForSSH function...
	I1027 23:02:11.336062  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.336828  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:11.336880  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.337199  396818 main.go:143] libmachine: Using SSH client type: native
	I1027 23:02:11.337524  396818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.135 22 <nil> <nil>}
	I1027 23:02:11.337546  396818 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1027 23:02:11.458295  396818 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:02:11.458675  396818 main.go:143] libmachine: domain creation complete
	I1027 23:02:11.460414  396818 machine.go:94] provisionDockerMachine start ...
	I1027 23:02:11.463561  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.464089  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:11.464134  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.464335  396818 main.go:143] libmachine: Using SSH client type: native
	I1027 23:02:11.464589  396818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.135 22 <nil> <nil>}
	I1027 23:02:11.464601  396818 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:02:11.577207  396818 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1027 23:02:11.577262  396818 buildroot.go:166] provisioning hostname "bridge-561731"
	I1027 23:02:11.580767  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.581166  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:11.581202  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.581383  396818 main.go:143] libmachine: Using SSH client type: native
	I1027 23:02:11.581596  396818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.135 22 <nil> <nil>}
	I1027 23:02:11.581607  396818 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-561731 && echo "bridge-561731" | sudo tee /etc/hostname
	I1027 23:02:11.715046  396818 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-561731
	
	I1027 23:02:11.718411  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.718990  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:11.719037  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.719273  396818 main.go:143] libmachine: Using SSH client type: native
	I1027 23:02:11.719539  396818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.135 22 <nil> <nil>}
	I1027 23:02:11.719560  396818 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-561731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-561731/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-561731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:02:11.841681  396818 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:02:11.841714  396818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21790-352679/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-352679/.minikube}
	I1027 23:02:11.841773  396818 buildroot.go:174] setting up certificates
	I1027 23:02:11.841788  396818 provision.go:84] configureAuth start
	I1027 23:02:11.845426  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.846008  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:11.846037  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.848452  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.848795  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:11.848816  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:11.849002  396818 provision.go:143] copyHostCerts
	I1027 23:02:11.849067  396818 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem, removing ...
	I1027 23:02:11.849090  396818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem
	I1027 23:02:11.849170  396818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem (1082 bytes)
	I1027 23:02:11.849278  396818 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem, removing ...
	I1027 23:02:11.849287  396818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem
	I1027 23:02:11.849314  396818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem (1123 bytes)
	I1027 23:02:11.849374  396818 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem, removing ...
	I1027 23:02:11.849383  396818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem
	I1027 23:02:11.849406  396818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem (1675 bytes)
	I1027 23:02:11.849461  396818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem org=jenkins.bridge-561731 san=[127.0.0.1 192.168.83.135 bridge-561731 localhost minikube]
	I1027 23:02:12.121800  396818 provision.go:177] copyRemoteCerts
	I1027 23:02:12.121871  396818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:02:12.124640  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.125115  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:12.125145  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.125271  396818 sshutil.go:53] new ssh client: &{IP:192.168.83.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/id_rsa Username:docker}
	I1027 23:02:12.213145  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:02:12.245872  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 23:02:12.283683  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:02:12.320345  396818 provision.go:87] duration metric: took 478.534981ms to configureAuth
	I1027 23:02:12.320383  396818 buildroot.go:189] setting minikube options for container-runtime
	I1027 23:02:12.320609  396818 config.go:182] Loaded profile config "bridge-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:02:12.324181  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.324549  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:12.324573  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.324785  396818 main.go:143] libmachine: Using SSH client type: native
	I1027 23:02:12.325017  396818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.135 22 <nil> <nil>}
	I1027 23:02:12.325033  396818 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:02:12.606996  396818 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:02:12.607036  396818 machine.go:97] duration metric: took 1.146601573s to provisionDockerMachine
	I1027 23:02:12.607050  396818 client.go:176] duration metric: took 21.807887462s to LocalClient.Create
	I1027 23:02:12.607083  396818 start.go:167] duration metric: took 21.807973776s to libmachine.API.Create "bridge-561731"
	I1027 23:02:12.607093  396818 start.go:293] postStartSetup for "bridge-561731" (driver="kvm2")
	I1027 23:02:12.607110  396818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:02:12.607199  396818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:02:12.610991  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.611537  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:12.611573  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.611750  396818 sshutil.go:53] new ssh client: &{IP:192.168.83.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/id_rsa Username:docker}
	I1027 23:02:12.707697  396818 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:02:12.715167  396818 info.go:137] Remote host: Buildroot 2025.02
	I1027 23:02:12.715208  396818 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/addons for local assets ...
	I1027 23:02:12.715297  396818 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/files for local assets ...
	I1027 23:02:12.715409  396818 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem -> 3566212.pem in /etc/ssl/certs
	I1027 23:02:12.715551  396818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:02:12.733751  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /etc/ssl/certs/3566212.pem (1708 bytes)
	I1027 23:02:12.771079  396818 start.go:296] duration metric: took 163.966982ms for postStartSetup
	I1027 23:02:12.775258  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.776017  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:12.776073  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.776405  396818 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/config.json ...
	I1027 23:02:12.776747  396818 start.go:128] duration metric: took 21.980248427s to createHost
	I1027 23:02:12.779923  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.780437  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:12.780479  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.780711  396818 main.go:143] libmachine: Using SSH client type: native
	I1027 23:02:12.781013  396818 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.135 22 <nil> <nil>}
	I1027 23:02:12.781039  396818 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1027 23:02:12.901677  396818 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761606132.866005619
	
	I1027 23:02:12.901707  396818 fix.go:217] guest clock: 1761606132.866005619
	I1027 23:02:12.901718  396818 fix.go:230] Guest: 2025-10-27 23:02:12.866005619 +0000 UTC Remote: 2025-10-27 23:02:12.77677298 +0000 UTC m=+38.042667909 (delta=89.232639ms)
	I1027 23:02:12.901740  396818 fix.go:201] guest clock delta is within tolerance: 89.232639ms
	I1027 23:02:12.901748  396818 start.go:83] releasing machines lock for "bridge-561731", held for 22.105436631s
	I1027 23:02:12.905183  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.905684  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:12.905723  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.906465  396818 ssh_runner.go:195] Run: cat /version.json
	I1027 23:02:12.906574  396818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:02:12.910250  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.910379  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.910809  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:12.910862  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:12.910917  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.910925  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:12.911157  396818 sshutil.go:53] new ssh client: &{IP:192.168.83.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/id_rsa Username:docker}
	I1027 23:02:12.911381  396818 sshutil.go:53] new ssh client: &{IP:192.168.83.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/id_rsa Username:docker}
	I1027 23:02:12.997106  396818 ssh_runner.go:195] Run: systemctl --version
	I1027 23:02:13.024952  396818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:02:13.192922  396818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:02:13.201014  396818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:02:13.201097  396818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:02:13.223681  396818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 23:02:13.223709  396818 start.go:496] detecting cgroup driver to use...
	I1027 23:02:13.223814  396818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:02:13.249376  396818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:02:13.267731  396818 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:02:13.267823  396818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:02:13.298460  396818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:02:13.317716  396818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:02:13.473966  396818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:02:13.723662  396818 docker.go:234] disabling docker service ...
	I1027 23:02:13.723743  396818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:02:13.744112  396818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:02:13.762164  396818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:02:13.934981  396818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:02:14.101035  396818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:02:14.119233  396818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:02:14.151808  396818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:02:14.151908  396818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:02:14.167208  396818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:02:14.167288  396818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:02:14.182935  396818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:02:14.196166  396818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:02:14.210250  396818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:02:14.224810  396818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:02:14.238241  396818 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:02:14.260977  396818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:02:14.275210  396818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:02:14.287322  396818 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 23:02:14.287391  396818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 23:02:14.309760  396818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:02:14.324607  396818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:02:14.481461  396818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:02:14.619198  396818 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:02:14.619296  396818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:02:14.626195  396818 start.go:564] Will wait 60s for crictl version
	I1027 23:02:14.626301  396818 ssh_runner.go:195] Run: which crictl
	I1027 23:02:14.633658  396818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 23:02:14.684194  396818 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 23:02:14.684293  396818 ssh_runner.go:195] Run: crio --version
	I1027 23:02:14.719407  396818 ssh_runner.go:195] Run: crio --version
	I1027 23:02:14.754087  396818 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1027 23:02:14.758695  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:14.759184  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:14.759216  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:14.759439  396818 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1027 23:02:14.764882  396818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:02:14.781907  396818 kubeadm.go:884] updating cluster {Name:bridge-561731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-561731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.83.135 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:02:14.782041  396818 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:02:14.782111  396818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:02:13.295551  395987 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 23:02:13.306685  395987 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 23:02:13.306713  395987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1027 23:02:13.340194  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 23:02:13.830808  395987 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:02:13.830920  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:13.830962  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-561731 minikube.k8s.io/updated_at=2025_10_27T23_02_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=flannel-561731 minikube.k8s.io/primary=true
	I1027 23:02:13.856103  395987 ops.go:34] apiserver oom_adj: -16
	I1027 23:02:14.048808  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:14.549141  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:15.048967  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:15.549759  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1027 23:02:14.154144  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	W1027 23:02:16.157570  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	I1027 23:02:16.049740  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:16.549337  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:17.049819  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:17.549733  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:18.048973  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:18.548864  395987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:18.689963  395987 kubeadm.go:1114] duration metric: took 4.859132128s to wait for elevateKubeSystemPrivileges
	I1027 23:02:18.690014  395987 kubeadm.go:403] duration metric: took 19.832440069s to StartCluster
	I1027 23:02:18.690043  395987 settings.go:142] acquiring lock: {Name:mk9b0cd8ae1e83c76c2473e7845967d905910c67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:18.690143  395987 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 23:02:18.692217  395987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/kubeconfig: {Name:mkf142c57fc1d516984237b4e01b6acd26119765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:18.692564  395987 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:02:18.692553  395987 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.72.89 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:02:18.692585  395987 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:02:18.692687  395987 addons.go:69] Setting storage-provisioner=true in profile "flannel-561731"
	I1027 23:02:18.692711  395987 addons.go:69] Setting default-storageclass=true in profile "flannel-561731"
	I1027 23:02:18.692724  395987 addons.go:238] Setting addon storage-provisioner=true in "flannel-561731"
	I1027 23:02:18.692744  395987 config.go:182] Loaded profile config "flannel-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:02:18.692749  395987 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "flannel-561731"
	I1027 23:02:18.692757  395987 host.go:66] Checking if "flannel-561731" exists ...
	I1027 23:02:18.695964  395987 out.go:179] * Verifying Kubernetes components...
	I1027 23:02:18.696122  395987 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:02:14.826104  396818 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 23:02:14.826200  396818 ssh_runner.go:195] Run: which lz4
	I1027 23:02:14.831336  396818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 23:02:14.837073  396818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 23:02:14.837121  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1027 23:02:16.641283  396818 crio.go:462] duration metric: took 1.810022943s to copy over tarball
	I1027 23:02:16.641380  396818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 23:02:18.502795  396818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.861364313s)
	I1027 23:02:18.502840  396818 crio.go:469] duration metric: took 1.861516808s to extract the tarball
	I1027 23:02:18.502851  396818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1027 23:02:18.548841  396818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:02:18.611129  396818 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:02:18.611165  396818 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:02:18.611176  396818 kubeadm.go:935] updating node { 192.168.83.135 8443 v1.34.1 crio true true} ...
	I1027 23:02:18.611303  396818 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-561731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:bridge-561731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1027 23:02:18.611483  396818 ssh_runner.go:195] Run: crio config
	I1027 23:02:18.677619  396818 cni.go:84] Creating CNI manager for "bridge"
	I1027 23:02:18.677667  396818 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:02:18.677690  396818 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.135 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-561731 NodeName:bridge-561731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:02:18.677823  396818 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-561731"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.135"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.135"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 23:02:18.677898  396818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:02:18.694781  396818 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:02:18.694859  396818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:02:18.710296  396818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1027 23:02:18.734930  396818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:02:18.762878  396818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1027 23:02:18.790471  396818 ssh_runner.go:195] Run: grep 192.168.83.135	control-plane.minikube.internal$ /etc/hosts
	I1027 23:02:18.795640  396818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:02:18.813121  396818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:02:18.960540  396818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:02:18.996008  396818 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731 for IP: 192.168.83.135
	I1027 23:02:18.996035  396818 certs.go:195] generating shared ca certs ...
	I1027 23:02:18.996057  396818 certs.go:227] acquiring lock for ca certs: {Name:mk64cd4e3986ee7cc99471fffd6acf1db89a5b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:18.996286  396818 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key
	I1027 23:02:18.996353  396818 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key
	I1027 23:02:18.996367  396818 certs.go:257] generating profile certs ...
	I1027 23:02:18.996469  396818 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/client.key
	I1027 23:02:18.996493  396818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/client.crt with IP's: []
	I1027 23:02:19.092219  396818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/client.crt ...
	I1027 23:02:19.092255  396818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/client.crt: {Name:mkab920d5b30dfc6653f8f3e461749299ed0eb9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:19.092480  396818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/client.key ...
	I1027 23:02:19.092496  396818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/client.key: {Name:mk97e0e9fa94cfbcf37b2d01054e0a500b7bdfbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:19.092617  396818 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.key.3cdb1c7b
	I1027 23:02:19.092634  396818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.crt.3cdb1c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.135]
	I1027 23:02:19.174539  396818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.crt.3cdb1c7b ...
	I1027 23:02:19.174577  396818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.crt.3cdb1c7b: {Name:mk670b85ddc8e6f8f62255e330a50d68be5f08ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:19.174777  396818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.key.3cdb1c7b ...
	I1027 23:02:19.174795  396818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.key.3cdb1c7b: {Name:mk3e2c4562a77f5b1a96aade2de8eb4fc7ba2c67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:19.174921  396818 certs.go:382] copying /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.crt.3cdb1c7b -> /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.crt
	I1027 23:02:19.175023  396818 certs.go:386] copying /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.key.3cdb1c7b -> /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.key
	I1027 23:02:19.175108  396818 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/proxy-client.key
	I1027 23:02:19.175129  396818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/proxy-client.crt with IP's: []
	I1027 23:02:19.402815  396818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/proxy-client.crt ...
	I1027 23:02:19.402850  396818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/proxy-client.crt: {Name:mk3f516b7812221b2c42159bf2824adf87bba1d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:19.403078  396818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/proxy-client.key ...
	I1027 23:02:19.403101  396818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/proxy-client.key: {Name:mk0368d54081215a830a106011acf1caf079089e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:19.403343  396818 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem (1338 bytes)
	W1027 23:02:19.403382  396818 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621_empty.pem, impossibly tiny 0 bytes
	I1027 23:02:19.403394  396818 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:02:19.403426  396818 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:02:19.403458  396818 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:02:19.403489  396818 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem (1675 bytes)
	I1027 23:02:19.403548  396818 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem (1708 bytes)
	I1027 23:02:19.404222  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:02:19.446227  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:02:19.483938  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:02:19.519937  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 23:02:19.562138  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 23:02:19.609878  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 23:02:19.647141  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:02:19.696238  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/bridge-561731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:02:19.731107  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:02:19.775522  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem --> /usr/share/ca-certificates/356621.pem (1338 bytes)
	I1027 23:02:19.817458  396818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /usr/share/ca-certificates/3566212.pem (1708 bytes)
	I1027 23:02:19.854494  396818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:02:19.879448  396818 ssh_runner.go:195] Run: openssl version
	I1027 23:02:19.889357  396818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356621.pem && ln -fs /usr/share/ca-certificates/356621.pem /etc/ssl/certs/356621.pem"
	I1027 23:02:19.909150  396818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356621.pem
	I1027 23:02:19.915757  396818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 21:58 /usr/share/ca-certificates/356621.pem
	I1027 23:02:19.915838  396818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356621.pem
	I1027 23:02:19.924183  396818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356621.pem /etc/ssl/certs/51391683.0"
	I1027 23:02:19.939526  396818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3566212.pem && ln -fs /usr/share/ca-certificates/3566212.pem /etc/ssl/certs/3566212.pem"
	I1027 23:02:19.954794  396818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3566212.pem
	I1027 23:02:19.962521  396818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 21:58 /usr/share/ca-certificates/3566212.pem
	I1027 23:02:19.962603  396818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3566212.pem
	I1027 23:02:19.971519  396818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3566212.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:02:19.989941  396818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:02:20.006515  396818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:02:20.013445  396818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:02:20.013530  396818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:02:20.021976  396818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:02:20.038830  396818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:02:20.044786  396818 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 23:02:20.044863  396818 kubeadm.go:401] StartCluster: {Name:bridge-561731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-561731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.83.135 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:02:20.044957  396818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:02:20.045013  396818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:02:20.093317  396818 cri.go:89] found id: ""
	I1027 23:02:20.093408  396818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:02:20.107368  396818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 23:02:20.126760  396818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 23:02:20.142563  396818 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 23:02:20.142588  396818 kubeadm.go:158] found existing configuration files:
	
	I1027 23:02:20.142656  396818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 23:02:20.157874  396818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 23:02:20.157962  396818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 23:02:20.174171  396818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 23:02:20.188424  396818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 23:02:20.188502  396818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 23:02:20.204606  396818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 23:02:20.217249  396818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 23:02:20.217325  396818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 23:02:20.230863  396818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 23:02:20.244517  396818 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 23:02:20.244595  396818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 23:02:20.260017  396818 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 23:02:20.323689  396818 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 23:02:20.323784  396818 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:02:20.441992  396818 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:02:20.442195  396818 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:02:20.442392  396818 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 23:02:20.460918  396818 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 23:02:18.696760  395987 addons.go:238] Setting addon default-storageclass=true in "flannel-561731"
	I1027 23:02:18.696792  395987 host.go:66] Checking if "flannel-561731" exists ...
	I1027 23:02:18.697406  395987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:02:18.697426  395987 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:02:18.697442  395987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:02:18.698791  395987 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:02:18.698814  395987 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:02:18.701544  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:02:18.702055  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:02:18.702089  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:02:18.702297  395987 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/flannel-561731/id_rsa Username:docker}
	I1027 23:02:18.702551  395987 main.go:143] libmachine: domain flannel-561731 has defined MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:02:18.703107  395987 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:2f:14", ip: ""} in network mk-flannel-561731: {Iface:virbr4 ExpiryTime:2025-10-28 00:01:48 +0000 UTC Type:0 Mac:52:54:00:18:2f:14 Iaid: IPaddr:192.168.72.89 Prefix:24 Hostname:flannel-561731 Clientid:01:52:54:00:18:2f:14}
	I1027 23:02:18.703133  395987 main.go:143] libmachine: domain flannel-561731 has defined IP address 192.168.72.89 and MAC address 52:54:00:18:2f:14 in network mk-flannel-561731
	I1027 23:02:18.703317  395987 sshutil.go:53] new ssh client: &{IP:192.168.72.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/flannel-561731/id_rsa Username:docker}
	I1027 23:02:19.117398  395987 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:02:19.117402  395987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:02:19.157850  395987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:02:19.380866  395987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:02:20.534540  395987 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.416998194s)
	I1027 23:02:20.534634  395987 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.417186894s)
	I1027 23:02:20.534664  395987 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
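The pipeline completed above is how the host.minikube.internal record lands in CoreDNS: the Corefile from the coredns ConfigMap is piped through sed, which inserts a hosts block ahead of the existing forward directive and a log directive ahead of errors, and the result is fed back with kubectl replace. Stripped of the minikube-specific binary and kubeconfig paths, the same edit is:

	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | kubectl replace -f -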
	I1027 23:02:20.535884  395987 node_ready.go:35] waiting up to 15m0s for node "flannel-561731" to be "Ready" ...
	I1027 23:02:21.386105  395987 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-561731" context rescaled to 1 replicas
	I1027 23:02:21.534954  395987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.377052659s)
	I1027 23:02:21.535029  395987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.154119662s)
	I1027 23:02:21.554775  395987 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1027 23:02:18.658143  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	W1027 23:02:20.801243  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	I1027 23:02:20.678519  396818 out.go:252]   - Generating certificates and keys ...
	I1027 23:02:20.678643  396818 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:02:20.678775  396818 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:02:20.678927  396818 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:02:21.055027  396818 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:02:21.181947  396818 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 23:02:21.263169  396818 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:02:21.361754  396818 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:02:21.361987  396818 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-561731 localhost] and IPs [192.168.83.135 127.0.0.1 ::1]
	I1027 23:02:21.662143  396818 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:02:21.662380  396818 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-561731 localhost] and IPs [192.168.83.135 127.0.0.1 ::1]
	I1027 23:02:21.994960  396818 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:02:22.428150  396818 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 23:02:22.638205  396818 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:02:22.638296  396818 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:02:22.680818  396818 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:02:22.784846  396818 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:02:22.977925  396818 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:02:23.914694  396818 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:02:24.209769  396818 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:02:24.211332  396818 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:02:24.216137  396818 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:02:24.217456  396818 out.go:252]   - Booting up control plane ...
	I1027 23:02:24.217594  396818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:02:24.218196  396818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:02:24.219197  396818 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:02:24.240404  396818 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:02:24.241051  396818 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:02:24.249139  396818 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:02:24.249775  396818 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:02:24.249846  396818 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:02:24.441744  396818 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:02:24.442089  396818 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:02:21.556038  395987 addons.go:514] duration metric: took 2.863446245s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1027 23:02:22.540363  395987 node_ready.go:57] node "flannel-561731" has "Ready":"False" status (will retry)
	W1027 23:02:24.546121  395987 node_ready.go:57] node "flannel-561731" has "Ready":"False" status (will retry)
	W1027 23:02:23.152845  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	W1027 23:02:25.153655  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	W1027 23:02:27.156119  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	I1027 23:02:24.943392  396818 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.698127ms
	I1027 23:02:24.948322  396818 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:02:24.948409  396818 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.83.135:8443/livez
	I1027 23:02:24.948524  396818 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:02:24.948640  396818 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:02:26.750671  396818 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.803526637s
	I1027 23:02:29.537353  396818 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.591778179s
	W1027 23:02:27.043730  395987 node_ready.go:57] node "flannel-561731" has "Ready":"False" status (will retry)
	I1027 23:02:28.052366  395987 node_ready.go:49] node "flannel-561731" is "Ready"
	I1027 23:02:28.052430  395987 node_ready.go:38] duration metric: took 7.516482609s for node "flannel-561731" to be "Ready" ...
	I1027 23:02:28.052456  395987 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:02:28.052529  395987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:02:28.119544  395987 api_server.go:72] duration metric: took 9.426857861s to wait for apiserver process to appear ...
	I1027 23:02:28.119580  395987 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:02:28.119610  395987 api_server.go:253] Checking apiserver healthz at https://192.168.72.89:8443/healthz ...
	I1027 23:02:28.128757  395987 api_server.go:279] https://192.168.72.89:8443/healthz returned 200:
	ok
	I1027 23:02:28.129996  395987 api_server.go:141] control plane version: v1.34.1
	I1027 23:02:28.130034  395987 api_server.go:131] duration metric: took 10.44369ms to wait for apiserver health ...
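The healthz wait above is a plain HTTPS GET against the apiserver, considered healthy once the endpoint answers 200 with the body "ok". An equivalent manual probe (using -k to skip certificate verification for brevity) would be:

	curl -k https://192.168.72.89:8443/healthz
	# expected response body on success: ok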
	I1027 23:02:28.130047  395987 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:02:28.137358  395987 system_pods.go:59] 7 kube-system pods found
	I1027 23:02:28.137411  395987 system_pods.go:61] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:28.137427  395987 system_pods.go:61] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:28.137437  395987 system_pods.go:61] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:28.137443  395987 system_pods.go:61] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:28.137450  395987 system_pods.go:61] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:28.137455  395987 system_pods.go:61] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:28.137463  395987 system_pods.go:61] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:02:28.137475  395987 system_pods.go:74] duration metric: took 7.421032ms to wait for pod list to return data ...
	I1027 23:02:28.137486  395987 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:02:28.139734  395987 default_sa.go:45] found service account: "default"
	I1027 23:02:28.139764  395987 default_sa.go:55] duration metric: took 2.271111ms for default service account to be created ...
	I1027 23:02:28.139779  395987 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:02:28.150741  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:28.150785  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:28.150791  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:28.150808  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:28.150816  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:28.150822  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:28.150827  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:28.150842  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:02:28.150868  395987 retry.go:31] will retry after 267.904027ms: missing components: kube-dns
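Each retry block that follows repeats the same check: list the kube-system pods and keep waiting, with a growing delay, until the kube-dns component (CoreDNS) is Running and Ready. Waiting on the same condition by hand would look something like this, assuming the standard k8s-app=kube-dns label that also appears in the label list near the end of this run:

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=kube-dns --timeout=120s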
	I1027 23:02:28.430919  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:28.430963  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:28.430973  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:28.430982  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:28.430988  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:28.431005  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:28.431011  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:28.431019  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:02:28.431051  395987 retry.go:31] will retry after 348.276262ms: missing components: kube-dns
	I1027 23:02:28.784920  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:28.784960  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:28.784965  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:28.784971  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:28.784975  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:28.784978  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:28.784981  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:28.784988  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:02:28.785015  395987 retry.go:31] will retry after 408.555189ms: missing components: kube-dns
	I1027 23:02:29.199307  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:29.199344  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:29.199350  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:29.199356  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:29.199360  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:29.199363  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:29.199366  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:29.199369  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Running
	I1027 23:02:29.199387  395987 retry.go:31] will retry after 592.362685ms: missing components: kube-dns
	I1027 23:02:29.797406  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:29.797445  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:29.797455  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:29.797465  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:29.797472  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:29.797477  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:29.797481  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:29.797486  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Running
	I1027 23:02:29.797507  395987 retry.go:31] will retry after 734.672048ms: missing components: kube-dns
	I1027 23:02:30.538204  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:30.538245  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:30.538253  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:30.538264  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:30.538268  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:30.538273  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:30.538278  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:30.538284  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Running
	I1027 23:02:30.538304  395987 retry.go:31] will retry after 719.24351ms: missing components: kube-dns
	I1027 23:02:31.447061  396818 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501907823s
	I1027 23:02:31.461925  396818 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:02:31.517002  396818 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:02:31.547921  396818 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:02:31.548215  396818 kubeadm.go:319] [mark-control-plane] Marking the node bridge-561731 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:02:31.572026  396818 kubeadm.go:319] [bootstrap-token] Using token: edkc3g.yz3140370q14zoio
	W1027 23:02:29.659106  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	W1027 23:02:31.660189  395132 pod_ready.go:104] pod "coredns-66bc5c9577-wkkjh" is not "Ready", error: <nil>
	I1027 23:02:32.664236  395132 pod_ready.go:94] pod "coredns-66bc5c9577-wkkjh" is "Ready"
	I1027 23:02:32.664286  395132 pod_ready.go:86] duration metric: took 32.017453901s for pod "coredns-66bc5c9577-wkkjh" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:32.670332  395132 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:32.675808  395132 pod_ready.go:94] pod "etcd-enable-default-cni-561731" is "Ready"
	I1027 23:02:32.675844  395132 pod_ready.go:86] duration metric: took 5.483652ms for pod "etcd-enable-default-cni-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:32.680140  395132 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:32.691493  395132 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-561731" is "Ready"
	I1027 23:02:32.691531  395132 pod_ready.go:86] duration metric: took 11.361422ms for pod "kube-apiserver-enable-default-cni-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:32.771702  395132 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:31.573644  396818 out.go:252]   - Configuring RBAC rules ...
	I1027 23:02:31.573831  396818 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:02:31.588506  396818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:02:31.599007  396818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:02:31.604821  396818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:02:31.613454  396818 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:02:31.617347  396818 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:02:31.855540  396818 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:02:32.353151  396818 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:02:32.856165  396818 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:02:32.857977  396818 kubeadm.go:319] 
	I1027 23:02:32.858066  396818 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:02:32.858116  396818 kubeadm.go:319] 
	I1027 23:02:32.858272  396818 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:02:32.858303  396818 kubeadm.go:319] 
	I1027 23:02:32.858353  396818 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:02:32.858434  396818 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:02:32.858518  396818 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:02:32.858528  396818 kubeadm.go:319] 
	I1027 23:02:32.858610  396818 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:02:32.858626  396818 kubeadm.go:319] 
	I1027 23:02:32.858738  396818 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:02:32.858757  396818 kubeadm.go:319] 
	I1027 23:02:32.858809  396818 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:02:32.858916  396818 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:02:32.859029  396818 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:02:32.859047  396818 kubeadm.go:319] 
	I1027 23:02:32.859175  396818 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:02:32.859309  396818 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:02:32.859337  396818 kubeadm.go:319] 
	I1027 23:02:32.859462  396818 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token edkc3g.yz3140370q14zoio \
	I1027 23:02:32.859563  396818 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cb8ddfbc3ba5a862ece84051047c6250a766e6f4afeb4ad0a97b6e833be7e0c \
	I1027 23:02:32.859584  396818 kubeadm.go:319] 	--control-plane 
	I1027 23:02:32.859589  396818 kubeadm.go:319] 
	I1027 23:02:32.859659  396818 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:02:32.859667  396818 kubeadm.go:319] 
	I1027 23:02:32.859746  396818 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token edkc3g.yz3140370q14zoio \
	I1027 23:02:32.859833  396818 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cb8ddfbc3ba5a862ece84051047c6250a766e6f4afeb4ad0a97b6e833be7e0c 
	I1027 23:02:32.860592  396818 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:02:32.860619  396818 cni.go:84] Creating CNI manager for "bridge"
	I1027 23:02:32.862490  396818 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1027 23:02:32.851982  395132 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-561731" is "Ready"
	I1027 23:02:32.852020  395132 pod_ready.go:86] duration metric: took 80.285981ms for pod "kube-controller-manager-enable-default-cni-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:33.051373  395132 pod_ready.go:83] waiting for pod "kube-proxy-rrqzw" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:33.451976  395132 pod_ready.go:94] pod "kube-proxy-rrqzw" is "Ready"
	I1027 23:02:33.452007  395132 pod_ready.go:86] duration metric: took 400.592376ms for pod "kube-proxy-rrqzw" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:33.652103  395132 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:34.051750  395132 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-561731" is "Ready"
	I1027 23:02:34.051783  395132 pod_ready.go:86] duration metric: took 399.641898ms for pod "kube-scheduler-enable-default-cni-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:34.051802  395132 pod_ready.go:40] duration metric: took 33.410889926s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:02:34.105346  395132 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 23:02:34.107187  395132 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-561731" cluster and "default" namespace by default
	I1027 23:02:32.863951  396818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1027 23:02:32.887770  396818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
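The 496-byte file written here is the node's bridge CNI configuration. The log does not show its contents; a typical bridge-type conflist has roughly the shape sketched below, using the real bridge, host-local and portmap CNI plugins but with illustrative values (the subnet in particular is only an example, not necessarily what minikube wrote):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF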
	I1027 23:02:32.919345  396818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:02:32.919464  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:32.919510  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-561731 minikube.k8s.io/updated_at=2025_10_27T23_02_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=bridge-561731 minikube.k8s.io/primary=true
	I1027 23:02:32.987920  396818 ops.go:34] apiserver oom_adj: -16
	I1027 23:02:33.077813  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:33.578883  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:34.078623  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:34.578965  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:31.263492  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:31.263537  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:31.263544  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:31.263551  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:31.263555  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:31.263559  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:31.263562  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:31.263565  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Running
	I1027 23:02:31.263584  395987 retry.go:31] will retry after 1.132958333s: missing components: kube-dns
	I1027 23:02:32.404656  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:32.404691  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:32.404697  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:32.404705  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:32.404708  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:32.404712  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:32.404715  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:32.404718  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Running
	I1027 23:02:32.404739  395987 retry.go:31] will retry after 1.414843839s: missing components: kube-dns
	I1027 23:02:33.825258  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:33.825297  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:33.825305  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:33.825314  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:33.825319  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:33.825324  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:33.825336  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:33.825342  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Running
	I1027 23:02:33.825361  395987 retry.go:31] will retry after 1.626200969s: missing components: kube-dns
	I1027 23:02:35.458095  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:35.458139  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:35.458147  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:35.458158  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:35.458164  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:35.458169  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:35.458182  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:35.458190  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Running
	I1027 23:02:35.458207  395987 retry.go:31] will retry after 1.940342024s: missing components: kube-dns
	I1027 23:02:35.078072  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:35.578700  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:36.078072  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:36.578652  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:37.078031  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:37.578583  396818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:02:37.689368  396818 kubeadm.go:1114] duration metric: took 4.769960117s to wait for elevateKubeSystemPrivileges
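The burst of `kubectl get sa default` calls above, repeated at roughly half-second intervals, is minikube polling until the default service account exists in the new cluster (the step timed here as elevateKubeSystemPrivileges). The equivalent manual wait, using the same command the log shows, is:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done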
	I1027 23:02:37.689420  396818 kubeadm.go:403] duration metric: took 17.644560477s to StartCluster
	I1027 23:02:37.689452  396818 settings.go:142] acquiring lock: {Name:mk9b0cd8ae1e83c76c2473e7845967d905910c67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:37.689562  396818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 23:02:37.691803  396818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/kubeconfig: {Name:mkf142c57fc1d516984237b4e01b6acd26119765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:02:37.692115  396818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:02:37.692126  396818 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.83.135 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:02:37.692217  396818 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:02:37.692319  396818 addons.go:69] Setting storage-provisioner=true in profile "bridge-561731"
	I1027 23:02:37.692330  396818 config.go:182] Loaded profile config "bridge-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:02:37.692364  396818 addons.go:69] Setting default-storageclass=true in profile "bridge-561731"
	I1027 23:02:37.692340  396818 addons.go:238] Setting addon storage-provisioner=true in "bridge-561731"
	I1027 23:02:37.692379  396818 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-561731"
	I1027 23:02:37.692427  396818 host.go:66] Checking if "bridge-561731" exists ...
	I1027 23:02:37.693505  396818 out.go:179] * Verifying Kubernetes components...
	I1027 23:02:37.695102  396818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:02:37.697144  396818 addons.go:238] Setting addon default-storageclass=true in "bridge-561731"
	I1027 23:02:37.697193  396818 host.go:66] Checking if "bridge-561731" exists ...
	I1027 23:02:37.697903  396818 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:02:37.699078  396818 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:02:37.699102  396818 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:02:37.699393  396818 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:02:37.699413  396818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:02:37.702914  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:37.703190  396818 main.go:143] libmachine: domain bridge-561731 has defined MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:37.703435  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:37.703469  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:37.703653  396818 sshutil.go:53] new ssh client: &{IP:192.168.83.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/id_rsa Username:docker}
	I1027 23:02:37.703939  396818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4d:e7:d1", ip: ""} in network mk-bridge-561731: {Iface:virbr5 ExpiryTime:2025-10-28 00:02:08 +0000 UTC Type:0 Mac:52:54:00:4d:e7:d1 Iaid: IPaddr:192.168.83.135 Prefix:24 Hostname:bridge-561731 Clientid:01:52:54:00:4d:e7:d1}
	I1027 23:02:37.703978  396818 main.go:143] libmachine: domain bridge-561731 has defined IP address 192.168.83.135 and MAC address 52:54:00:4d:e7:d1 in network mk-bridge-561731
	I1027 23:02:37.704168  396818 sshutil.go:53] new ssh client: &{IP:192.168.83.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/bridge-561731/id_rsa Username:docker}
	I1027 23:02:38.063916  396818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:02:38.187232  396818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:02:38.521971  396818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:02:38.581154  396818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:02:39.428132  396818 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.364162928s)
	I1027 23:02:39.428162  396818 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.240883821s)
	I1027 23:02:39.428181  396818 start.go:977] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1027 23:02:39.429670  396818 node_ready.go:35] waiting up to 15m0s for node "bridge-561731" to be "Ready" ...
	I1027 23:02:39.463039  396818 node_ready.go:49] node "bridge-561731" is "Ready"
	I1027 23:02:39.463081  396818 node_ready.go:38] duration metric: took 33.373532ms for node "bridge-561731" to be "Ready" ...
	I1027 23:02:39.463100  396818 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:02:39.463162  396818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:02:39.846950  396818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.324925413s)
	I1027 23:02:39.847000  396818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.265805912s)
	I1027 23:02:39.847036  396818 api_server.go:72] duration metric: took 2.154866936s to wait for apiserver process to appear ...
	I1027 23:02:39.847054  396818 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:02:39.847081  396818 api_server.go:253] Checking apiserver healthz at https://192.168.83.135:8443/healthz ...
	I1027 23:02:39.863483  396818 api_server.go:279] https://192.168.83.135:8443/healthz returned 200:
	ok
	I1027 23:02:39.865020  396818 api_server.go:141] control plane version: v1.34.1
	I1027 23:02:39.865056  396818 api_server.go:131] duration metric: took 17.993298ms to wait for apiserver health ...
	I1027 23:02:39.865067  396818 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:02:39.865448  396818 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 23:02:37.404650  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:37.404694  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:37.404702  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:37.404713  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:37.404719  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:37.404730  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:37.404735  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:37.404740  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Running
	I1027 23:02:37.404760  395987 retry.go:31] will retry after 2.840620699s: missing components: kube-dns
	I1027 23:02:40.252103  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:40.252137  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:40.252145  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:40.252151  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:40.252155  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:40.252158  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:40.252161  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:40.252164  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Running
	I1027 23:02:40.252182  395987 retry.go:31] will retry after 3.410443039s: missing components: kube-dns
	I1027 23:02:39.866802  396818 addons.go:514] duration metric: took 2.174580769s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:02:39.871969  396818 system_pods.go:59] 8 kube-system pods found
	I1027 23:02:39.872002  396818 system_pods.go:61] "coredns-66bc5c9577-pqfgs" [80bf796c-7131-44dc-8046-39e23d85da37] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:39.872016  396818 system_pods.go:61] "coredns-66bc5c9577-wd6x6" [d1263e80-e68b-4dc5-9deb-95e2ea65162a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:39.872023  396818 system_pods.go:61] "etcd-bridge-561731" [5beea6e6-97b8-4051-bcdb-41e923d3688f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:02:39.872029  396818 system_pods.go:61] "kube-apiserver-bridge-561731" [675414dc-e022-4523-b79e-0ea8c78276cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:02:39.872033  396818 system_pods.go:61] "kube-controller-manager-bridge-561731" [2372e318-e3ab-4d88-865b-e6b77aaae219] Running
	I1027 23:02:39.872037  396818 system_pods.go:61] "kube-proxy-4nm7s" [915d8403-df14-4980-ab08-3268d1772763] Running
	I1027 23:02:39.872039  396818 system_pods.go:61] "kube-scheduler-bridge-561731" [e44b84a4-f8d4-4632-b5a6-5bb57372119b] Running
	I1027 23:02:39.872042  396818 system_pods.go:61] "storage-provisioner" [793319b3-c5cf-44fd-8184-09d1f9b30990] Pending
	I1027 23:02:39.872047  396818 system_pods.go:74] duration metric: took 6.974875ms to wait for pod list to return data ...
	I1027 23:02:39.872058  396818 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:02:39.877144  396818 default_sa.go:45] found service account: "default"
	I1027 23:02:39.877170  396818 default_sa.go:55] duration metric: took 5.106784ms for default service account to be created ...
	I1027 23:02:39.877180  396818 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:02:39.885075  396818 system_pods.go:86] 8 kube-system pods found
	I1027 23:02:39.885107  396818 system_pods.go:89] "coredns-66bc5c9577-pqfgs" [80bf796c-7131-44dc-8046-39e23d85da37] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:39.885115  396818 system_pods.go:89] "coredns-66bc5c9577-wd6x6" [d1263e80-e68b-4dc5-9deb-95e2ea65162a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:39.885121  396818 system_pods.go:89] "etcd-bridge-561731" [5beea6e6-97b8-4051-bcdb-41e923d3688f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:02:39.885127  396818 system_pods.go:89] "kube-apiserver-bridge-561731" [675414dc-e022-4523-b79e-0ea8c78276cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:02:39.885137  396818 system_pods.go:89] "kube-controller-manager-bridge-561731" [2372e318-e3ab-4d88-865b-e6b77aaae219] Running
	I1027 23:02:39.885141  396818 system_pods.go:89] "kube-proxy-4nm7s" [915d8403-df14-4980-ab08-3268d1772763] Running
	I1027 23:02:39.885145  396818 system_pods.go:89] "kube-scheduler-bridge-561731" [e44b84a4-f8d4-4632-b5a6-5bb57372119b] Running
	I1027 23:02:39.885149  396818 system_pods.go:89] "storage-provisioner" [793319b3-c5cf-44fd-8184-09d1f9b30990] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:02:39.885175  396818 retry.go:31] will retry after 203.226999ms: missing components: kube-dns
	I1027 23:02:39.933383  396818 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-561731" context rescaled to 1 replicas
	I1027 23:02:40.093175  396818 system_pods.go:86] 8 kube-system pods found
	I1027 23:02:40.093211  396818 system_pods.go:89] "coredns-66bc5c9577-pqfgs" [80bf796c-7131-44dc-8046-39e23d85da37] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:40.093218  396818 system_pods.go:89] "coredns-66bc5c9577-wd6x6" [d1263e80-e68b-4dc5-9deb-95e2ea65162a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:40.093224  396818 system_pods.go:89] "etcd-bridge-561731" [5beea6e6-97b8-4051-bcdb-41e923d3688f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:02:40.093231  396818 system_pods.go:89] "kube-apiserver-bridge-561731" [675414dc-e022-4523-b79e-0ea8c78276cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:02:40.093235  396818 system_pods.go:89] "kube-controller-manager-bridge-561731" [2372e318-e3ab-4d88-865b-e6b77aaae219] Running
	I1027 23:02:40.093238  396818 system_pods.go:89] "kube-proxy-4nm7s" [915d8403-df14-4980-ab08-3268d1772763] Running
	I1027 23:02:40.093241  396818 system_pods.go:89] "kube-scheduler-bridge-561731" [e44b84a4-f8d4-4632-b5a6-5bb57372119b] Running
	I1027 23:02:40.093246  396818 system_pods.go:89] "storage-provisioner" [793319b3-c5cf-44fd-8184-09d1f9b30990] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:02:40.093261  396818 retry.go:31] will retry after 363.777673ms: missing components: kube-dns
	I1027 23:02:40.464200  396818 system_pods.go:86] 8 kube-system pods found
	I1027 23:02:40.464245  396818 system_pods.go:89] "coredns-66bc5c9577-pqfgs" [80bf796c-7131-44dc-8046-39e23d85da37] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:40.464256  396818 system_pods.go:89] "coredns-66bc5c9577-wd6x6" [d1263e80-e68b-4dc5-9deb-95e2ea65162a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:40.464265  396818 system_pods.go:89] "etcd-bridge-561731" [5beea6e6-97b8-4051-bcdb-41e923d3688f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:02:40.464273  396818 system_pods.go:89] "kube-apiserver-bridge-561731" [675414dc-e022-4523-b79e-0ea8c78276cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:02:40.464280  396818 system_pods.go:89] "kube-controller-manager-bridge-561731" [2372e318-e3ab-4d88-865b-e6b77aaae219] Running
	I1027 23:02:40.464285  396818 system_pods.go:89] "kube-proxy-4nm7s" [915d8403-df14-4980-ab08-3268d1772763] Running
	I1027 23:02:40.464291  396818 system_pods.go:89] "kube-scheduler-bridge-561731" [e44b84a4-f8d4-4632-b5a6-5bb57372119b] Running
	I1027 23:02:40.464298  396818 system_pods.go:89] "storage-provisioner" [793319b3-c5cf-44fd-8184-09d1f9b30990] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:02:40.464321  396818 retry.go:31] will retry after 320.586268ms: missing components: kube-dns
	I1027 23:02:40.791530  396818 system_pods.go:86] 8 kube-system pods found
	I1027 23:02:40.791573  396818 system_pods.go:89] "coredns-66bc5c9577-pqfgs" [80bf796c-7131-44dc-8046-39e23d85da37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:02:40.791582  396818 system_pods.go:89] "coredns-66bc5c9577-wd6x6" [d1263e80-e68b-4dc5-9deb-95e2ea65162a] Running
	I1027 23:02:40.791591  396818 system_pods.go:89] "etcd-bridge-561731" [5beea6e6-97b8-4051-bcdb-41e923d3688f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:02:40.791597  396818 system_pods.go:89] "kube-apiserver-bridge-561731" [675414dc-e022-4523-b79e-0ea8c78276cd] Running
	I1027 23:02:40.791602  396818 system_pods.go:89] "kube-controller-manager-bridge-561731" [2372e318-e3ab-4d88-865b-e6b77aaae219] Running
	I1027 23:02:40.791607  396818 system_pods.go:89] "kube-proxy-4nm7s" [915d8403-df14-4980-ab08-3268d1772763] Running
	I1027 23:02:40.791611  396818 system_pods.go:89] "kube-scheduler-bridge-561731" [e44b84a4-f8d4-4632-b5a6-5bb57372119b] Running
	I1027 23:02:40.791616  396818 system_pods.go:89] "storage-provisioner" [793319b3-c5cf-44fd-8184-09d1f9b30990] Running
	I1027 23:02:40.791625  396818 system_pods.go:126] duration metric: took 914.439358ms to wait for k8s-apps to be running ...
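The wait loop above polls the kube-system pod list with short backoffs until no required component (here kube-dns) is still missing. Below is a minimal client-go sketch of that pattern, not minikube's actual implementation; the kubeconfig path, poll interval, and timeout are illustrative only.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load a kubeconfig the way kubectl would (path is illustrative).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll until every kube-system pod is Running or Succeeded, like the
    	// "waiting for k8s-apps to be running" phase in the log above.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep retrying
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
    					fmt.Printf("still waiting: %s is %s\n", p.Name, p.Status.Phase)
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("all kube-system pods are running")
    }

Returning false with a nil error keeps the poll going, mirroring the repeated "will retry after …" lines above.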
	I1027 23:02:40.791640  396818 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:02:40.791705  396818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:02:40.811824  396818 system_svc.go:56] duration metric: took 20.174032ms WaitForService to wait for kubelet
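The kubelet check above shells out to systemctl is-active --quiet, whose exit status alone answers whether the unit is active. A small sketch of that check follows; minikube runs it over SSH inside the guest, so running it directly on the host is only for illustration.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// "systemctl is-active --quiet <unit>" exits 0 iff the unit is active,
    	// so the exit status is the whole answer.
    	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("kubelet service is not active:", err)
    		return
    	}
    	fmt.Println("kubelet service is active")
    }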
	I1027 23:02:40.811862  396818 kubeadm.go:587] duration metric: took 3.119700052s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:02:40.811882  396818 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:02:40.815536  396818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 23:02:40.815584  396818 node_conditions.go:123] node cpu capacity is 2
	I1027 23:02:40.815604  396818 node_conditions.go:105] duration metric: took 3.707399ms to run NodePressure ...
	I1027 23:02:40.815621  396818 start.go:242] waiting for startup goroutines ...
	I1027 23:02:40.815631  396818 start.go:247] waiting for cluster config update ...
	I1027 23:02:40.815652  396818 start.go:256] writing updated cluster config ...
	I1027 23:02:40.816075  396818 ssh_runner.go:195] Run: rm -f paused
	I1027 23:02:40.822402  396818 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:02:40.827471  396818 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pqfgs" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 23:02:42.835368  396818 pod_ready.go:104] pod "coredns-66bc5c9577-pqfgs" is not "Ready", error: <nil>
	I1027 23:02:43.669783  395987 system_pods.go:86] 7 kube-system pods found
	I1027 23:02:43.669824  395987 system_pods.go:89] "coredns-66bc5c9577-d75xh" [fd3daf84-244e-480b-84a4-393199dd5888] Running
	I1027 23:02:43.669846  395987 system_pods.go:89] "etcd-flannel-561731" [3ef455e7-b1e4-490f-8869-d15f232b3982] Running
	I1027 23:02:43.669852  395987 system_pods.go:89] "kube-apiserver-flannel-561731" [93939ed8-73ac-40e6-86e9-b82e744ed9b8] Running
	I1027 23:02:43.669858  395987 system_pods.go:89] "kube-controller-manager-flannel-561731" [3f297b8e-c563-4630-8476-6ad5fd61f6b3] Running
	I1027 23:02:43.669863  395987 system_pods.go:89] "kube-proxy-qjnfj" [7035361b-86b4-4d12-b39a-9f700b3ac594] Running
	I1027 23:02:43.669868  395987 system_pods.go:89] "kube-scheduler-flannel-561731" [e346aa71-ebe0-49dc-8a9d-d2682848c388] Running
	I1027 23:02:43.669876  395987 system_pods.go:89] "storage-provisioner" [aa0c54e6-4edc-454b-b872-b7e8696afd53] Running
	I1027 23:02:43.669904  395987 system_pods.go:126] duration metric: took 15.530100165s to wait for k8s-apps to be running ...
	I1027 23:02:43.669915  395987 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:02:43.669985  395987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:02:43.693422  395987 system_svc.go:56] duration metric: took 23.494197ms WaitForService to wait for kubelet
	I1027 23:02:43.693466  395987 kubeadm.go:587] duration metric: took 25.00078692s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:02:43.693497  395987 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:02:43.696698  395987 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 23:02:43.696729  395987 node_conditions.go:123] node cpu capacity is 2
	I1027 23:02:43.696748  395987 node_conditions.go:105] duration metric: took 3.243651ms to run NodePressure ...
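The NodePressure step reads each node's reported capacity, which is where the ephemeral-storage and CPU figures in the lines above come from. A hedged client-go sketch that prints the same two values; the kubeconfig path is illustrative.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity is a ResourceList; Cpu() and StorageEphemeral() return quantities.
    		fmt.Printf("%s: cpu capacity %s, ephemeral-storage capacity %s\n",
    			n.Name, n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
    	}
    }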
	I1027 23:02:43.696766  395987 start.go:242] waiting for startup goroutines ...
	I1027 23:02:43.696775  395987 start.go:247] waiting for cluster config update ...
	I1027 23:02:43.696791  395987 start.go:256] writing updated cluster config ...
	I1027 23:02:43.697178  395987 ssh_runner.go:195] Run: rm -f paused
	I1027 23:02:43.704863  395987 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:02:43.710651  395987 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d75xh" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:43.719446  395987 pod_ready.go:94] pod "coredns-66bc5c9577-d75xh" is "Ready"
	I1027 23:02:43.719479  395987 pod_ready.go:86] duration metric: took 8.784454ms for pod "coredns-66bc5c9577-d75xh" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:43.723279  395987 pod_ready.go:83] waiting for pod "etcd-flannel-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:43.729506  395987 pod_ready.go:94] pod "etcd-flannel-561731" is "Ready"
	I1027 23:02:43.729534  395987 pod_ready.go:86] duration metric: took 6.216957ms for pod "etcd-flannel-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:43.732117  395987 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:43.739548  395987 pod_ready.go:94] pod "kube-apiserver-flannel-561731" is "Ready"
	I1027 23:02:43.739583  395987 pod_ready.go:86] duration metric: took 7.442527ms for pod "kube-apiserver-flannel-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:43.741905  395987 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:44.111321  395987 pod_ready.go:94] pod "kube-controller-manager-flannel-561731" is "Ready"
	I1027 23:02:44.111351  395987 pod_ready.go:86] duration metric: took 369.424206ms for pod "kube-controller-manager-flannel-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:44.310145  395987 pod_ready.go:83] waiting for pod "kube-proxy-qjnfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:44.709855  395987 pod_ready.go:94] pod "kube-proxy-qjnfj" is "Ready"
	I1027 23:02:44.709907  395987 pod_ready.go:86] duration metric: took 399.733607ms for pod "kube-proxy-qjnfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:44.910184  395987 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:45.309247  395987 pod_ready.go:94] pod "kube-scheduler-flannel-561731" is "Ready"
	I1027 23:02:45.309283  395987 pod_ready.go:86] duration metric: took 399.059678ms for pod "kube-scheduler-flannel-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:45.309298  395987 pod_ready.go:40] duration metric: took 1.604359242s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:02:45.356950  395987 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 23:02:45.359011  395987 out.go:179] * Done! kubectl is now configured to use "flannel-561731" cluster and "default" namespace by default
	W1027 23:02:45.334981  396818 pod_ready.go:104] pod "coredns-66bc5c9577-pqfgs" is not "Ready", error: <nil>
	I1027 23:02:46.832398  396818 pod_ready.go:99] pod "coredns-66bc5c9577-pqfgs" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-pqfgs" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-pqfgs" not found
	I1027 23:02:46.832433  396818 pod_ready.go:86] duration metric: took 6.0049214s for pod "coredns-66bc5c9577-pqfgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:46.832448  396818 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wd6x6" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:46.840065  396818 pod_ready.go:94] pod "coredns-66bc5c9577-wd6x6" is "Ready"
	I1027 23:02:46.840095  396818 pod_ready.go:86] duration metric: took 7.640862ms for pod "coredns-66bc5c9577-wd6x6" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:46.843387  396818 pod_ready.go:83] waiting for pod "etcd-bridge-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:46.851298  396818 pod_ready.go:94] pod "etcd-bridge-561731" is "Ready"
	I1027 23:02:46.851344  396818 pod_ready.go:86] duration metric: took 7.917098ms for pod "etcd-bridge-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:46.854653  396818 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:46.861237  396818 pod_ready.go:94] pod "kube-apiserver-bridge-561731" is "Ready"
	I1027 23:02:46.861277  396818 pod_ready.go:86] duration metric: took 6.586545ms for pod "kube-apiserver-bridge-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:46.865221  396818 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:47.233042  396818 pod_ready.go:94] pod "kube-controller-manager-bridge-561731" is "Ready"
	I1027 23:02:47.233078  396818 pod_ready.go:86] duration metric: took 367.488879ms for pod "kube-controller-manager-bridge-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:47.432005  396818 pod_ready.go:83] waiting for pod "kube-proxy-4nm7s" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:47.831609  396818 pod_ready.go:94] pod "kube-proxy-4nm7s" is "Ready"
	I1027 23:02:47.831639  396818 pod_ready.go:86] duration metric: took 399.597441ms for pod "kube-proxy-4nm7s" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:48.032739  396818 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:48.431515  396818 pod_ready.go:94] pod "kube-scheduler-bridge-561731" is "Ready"
	I1027 23:02:48.431552  396818 pod_ready.go:86] duration metric: took 398.781553ms for pod "kube-scheduler-bridge-561731" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:02:48.431567  396818 pod_ready.go:40] duration metric: took 7.609119819s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:02:48.480170  396818 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 23:02:48.481825  396818 out.go:179] * Done! kubectl is now configured to use "bridge-561731" cluster and "default" namespace by default
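The pod_ready phase above waits, per labelled control-plane pod, until its Ready condition is True or the pod disappears entirely, which is what happened to coredns-66bc5c9577-pqfgs after the deployment was rescaled to one replica. A sketch of that "Ready or gone" wait with client-go; this is not minikube's code, and the pod name, namespace, and timeout are taken from or modelled on the log.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitReadyOrGone blocks until the named pod reports Ready=True or no longer exists.
    func waitReadyOrGone(client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				fmt.Printf("pod %s is gone\n", name)
    				return true, nil // a deleted pod (e.g. a scaled-down replica) also ends the wait
    			}
    			if err != nil {
    				return false, nil // transient errors: keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitReadyOrGone(client, "kube-system", "coredns-66bc5c9577-wd6x6", 4*time.Minute); err != nil {
    		panic(err)
    	}
    }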
	I1027 23:05:33.651551  387237 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.000287406s
	I1027 23:05:33.651601  387237 kubeadm.go:319] 
	I1027 23:05:33.651725  387237 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1027 23:05:33.651825  387237 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1027 23:05:33.651942  387237 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1027 23:05:33.652025  387237 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1027 23:05:33.652145  387237 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1027 23:05:33.652266  387237 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1027 23:05:33.652278  387237 kubeadm.go:319] 
	I1027 23:05:33.653959  387237 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:05:33.654245  387237 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.61.85:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.61.85:8443: connect: connection refused
	I1027 23:05:33.654352  387237 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1027 23:05:33.654424  387237 kubeadm.go:403] duration metric: took 12m13.667806501s to StartCluster
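kubeadm's control-plane-check above probes the apiserver's /livez endpoint and gives up after 4m0s of refused connections. A minimal probe along the same lines follows; the endpoint is the one from the log, and TLS verification is skipped only because this one-off check cares about reachability and status, not server identity.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 10 * time.Second,
    		Transport: &http.Transport{
    			// Reachability check only; do not reuse this client for real API calls.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.61.85:8443/livez")
    	if err != nil {
    		fmt.Println("kube-apiserver not reachable:", err) // e.g. "connection refused", as in the log
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("kube-apiserver /livez:", resp.Status)
    }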
	I1027 23:05:33.654520  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 23:05:33.654613  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 23:05:33.709824  387237 cri.go:89] found id: ""
	I1027 23:05:33.709879  387237 logs.go:282] 0 containers: []
	W1027 23:05:33.709905  387237 logs.go:284] No container was found matching "kube-apiserver"
	I1027 23:05:33.709915  387237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 23:05:33.710009  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 23:05:33.749859  387237 cri.go:89] found id: ""
	I1027 23:05:33.749909  387237 logs.go:282] 0 containers: []
	W1027 23:05:33.749921  387237 logs.go:284] No container was found matching "etcd"
	I1027 23:05:33.749932  387237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 23:05:33.749987  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 23:05:33.795987  387237 cri.go:89] found id: ""
	I1027 23:05:33.796025  387237 logs.go:282] 0 containers: []
	W1027 23:05:33.796036  387237 logs.go:284] No container was found matching "coredns"
	I1027 23:05:33.796044  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 23:05:33.796173  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 23:05:33.835683  387237 cri.go:89] found id: "195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf"
	I1027 23:05:33.835714  387237 cri.go:89] found id: ""
	I1027 23:05:33.835726  387237 logs.go:282] 1 containers: [195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf]
	I1027 23:05:33.835792  387237 ssh_runner.go:195] Run: which crictl
	I1027 23:05:33.840923  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 23:05:33.840998  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 23:05:33.881942  387237 cri.go:89] found id: ""
	I1027 23:05:33.881976  387237 logs.go:282] 0 containers: []
	W1027 23:05:33.881984  387237 logs.go:284] No container was found matching "kube-proxy"
	I1027 23:05:33.881993  387237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 23:05:33.882054  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 23:05:33.922601  387237 cri.go:89] found id: "8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55"
	I1027 23:05:33.922634  387237 cri.go:89] found id: ""
	I1027 23:05:33.922644  387237 logs.go:282] 1 containers: [8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55]
	I1027 23:05:33.922705  387237 ssh_runner.go:195] Run: which crictl
	I1027 23:05:33.928300  387237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 23:05:33.928386  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 23:05:33.968478  387237 cri.go:89] found id: ""
	I1027 23:05:33.968515  387237 logs.go:282] 0 containers: []
	W1027 23:05:33.968525  387237 logs.go:284] No container was found matching "kindnet"
	I1027 23:05:33.968533  387237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 23:05:33.968607  387237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 23:05:34.009588  387237 cri.go:89] found id: ""
	I1027 23:05:34.009627  387237 logs.go:282] 0 containers: []
	W1027 23:05:34.009638  387237 logs.go:284] No container was found matching "storage-provisioner"
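After the failed start, the post-mortem lists containers per component with crictl ps -a --quiet --name=<component> and records which filters come back empty, as the lines above show. A hedged sketch of that loop using os/exec; it assumes crictl is on PATH and, as in the log's invocations, needs root.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner"}

    	for _, name := range components {
    		// --quiet prints only container IDs, one per line; empty output means
    		// no container (running or exited) matched the name filter.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("%s: no container found\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }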
	I1027 23:05:34.009653  387237 logs.go:123] Gathering logs for kubelet ...
	I1027 23:05:34.009671  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 23:05:34.124620  387237 logs.go:123] Gathering logs for dmesg ...
	I1027 23:05:34.124667  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 23:05:34.143333  387237 logs.go:123] Gathering logs for describe nodes ...
	I1027 23:05:34.143377  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 23:05:34.222766  387237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 23:05:34.222794  387237 logs.go:123] Gathering logs for kube-scheduler [195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf] ...
	I1027 23:05:34.222810  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf"
	I1027 23:05:34.292301  387237 logs.go:123] Gathering logs for kube-controller-manager [8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55] ...
	I1027 23:05:34.292349  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55"
	I1027 23:05:34.339527  387237 logs.go:123] Gathering logs for CRI-O ...
	I1027 23:05:34.339560  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 23:05:34.542379  387237 logs.go:123] Gathering logs for container status ...
	I1027 23:05:34.542428  387237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1027 23:05:34.591395  387237 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.503607147s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.85:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 2.57478969s
	[control-plane-check] kube-scheduler is healthy after 2.92004853s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000287406s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.61.85:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.61.85:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	W1027 23:05:34.591529  387237 out.go:285] * 
	W1027 23:05:34.591640  387237 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output quoted above
	
	W1027 23:05:34.591665  387237 out.go:285] * 
	W1027 23:05:34.593550  387237 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 23:05:34.596755  387237 out.go:203] 
	W1027 23:05:34.598076  387237 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output quoted above
	
	W1027 23:05:34.598103  387237 out.go:285] * 
	I1027 23:05:34.599578  387237 out.go:203] 
	
	
	==> CRI-O <==
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.710713797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761606335710688362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac69879e-dc83-4901-8c82-f83893499726 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.711426240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1a531f7-f2d1-4a7b-a942-d4a3ff15c407 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.711480140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1a531f7-f2d1-4a7b-a942-d4a3ff15c407 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.711565989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55,PodSandboxId:436ac55d5db49875701ce765a6606835391e89d8a058aeabd65c6e6c58ed2dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:17,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761606243417680001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-216520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb366827ab7b32d13cb327d8b8d99103,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.c
ontainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 17,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf,PodSandboxId:59daf734d33dec0088e832a98167793814a6c4c26a7ac4700b3409b6a02ba7b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761606094156534907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-216520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4949aab1d2885e95c9ca3a2ce576786,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1a531f7-f2d1-4a7b-a942-d4a3ff15c407 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.749511438Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=384b6552-160b-4c2c-84c2-06608e28d60d name=/runtime.v1.RuntimeService/Version
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.749603526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=384b6552-160b-4c2c-84c2-06608e28d60d name=/runtime.v1.RuntimeService/Version
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.752456691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a3d9442-8050-497f-9ebf-3e4b3af98d27 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.753596054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761606335753500342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a3d9442-8050-497f-9ebf-3e4b3af98d27 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.754291191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8428e6b-a262-4fc1-ba3b-c0237885595d name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.754344802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8428e6b-a262-4fc1-ba3b-c0237885595d name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.754419566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55,PodSandboxId:436ac55d5db49875701ce765a6606835391e89d8a058aeabd65c6e6c58ed2dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:17,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761606243417680001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-216520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb366827ab7b32d13cb327d8b8d99103,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.c
ontainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 17,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf,PodSandboxId:59daf734d33dec0088e832a98167793814a6c4c26a7ac4700b3409b6a02ba7b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761606094156534907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-216520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4949aab1d2885e95c9ca3a2ce576786,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8428e6b-a262-4fc1-ba3b-c0237885595d name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.799892873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8aca6bd3-1b74-40f7-948a-deaeb6da3baf name=/runtime.v1.RuntimeService/Version
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.799999176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8aca6bd3-1b74-40f7-948a-deaeb6da3baf name=/runtime.v1.RuntimeService/Version
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.801446159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c2f2afa-0a28-485a-8af8-c07d919fd487 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.801979280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761606335801950565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c2f2afa-0a28-485a-8af8-c07d919fd487 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.802666071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b886e2c2-05de-42ab-a17a-27f8cb0bf713 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.802723262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b886e2c2-05de-42ab-a17a-27f8cb0bf713 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.802802870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55,PodSandboxId:436ac55d5db49875701ce765a6606835391e89d8a058aeabd65c6e6c58ed2dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:17,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761606243417680001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-216520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb366827ab7b32d13cb327d8b8d99103,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.c
ontainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 17,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf,PodSandboxId:59daf734d33dec0088e832a98167793814a6c4c26a7ac4700b3409b6a02ba7b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761606094156534907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-216520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4949aab1d2885e95c9ca3a2ce576786,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b886e2c2-05de-42ab-a17a-27f8cb0bf713 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.840173632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=acfe17a2-5871-4373-9e85-13855e3bc53b name=/runtime.v1.RuntimeService/Version
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.840265122Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=acfe17a2-5871-4373-9e85-13855e3bc53b name=/runtime.v1.RuntimeService/Version
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.841648093Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae6a50a8-4721-4fad-9ee1-fce0a54e8c91 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.842649118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761606335842623767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae6a50a8-4721-4fad-9ee1-fce0a54e8c91 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.843439360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=baaaa9fb-d9cf-4405-aa21-7876f18f0c0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.843695641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=baaaa9fb-d9cf-4405-aa21-7876f18f0c0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 23:05:35 kubernetes-upgrade-216520 crio[2318]: time="2025-10-27 23:05:35.843927176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55,PodSandboxId:436ac55d5db49875701ce765a6606835391e89d8a058aeabd65c6e6c58ed2dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:17,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761606243417680001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-216520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb366827ab7b32d13cb327d8b8d99103,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.c
ontainer.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 17,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf,PodSandboxId:59daf734d33dec0088e832a98167793814a6c4c26a7ac4700b3409b6a02ba7b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761606094156534907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-216520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
c4949aab1d2885e95c9ca3a2ce576786,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=baaaa9fb-d9cf-4405-aa21-7876f18f0c0b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8f3420cdc4209       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Exited              kube-controller-manager   17                  436ac55d5db49       kube-controller-manager-kubernetes-upgrade-216520
	195a5236ccaee       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   4 minutes ago        Running             kube-scheduler            4                   59daf734d33de       kube-scheduler-kubernetes-upgrade-216520
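The table shows only an exited kube-controller-manager (attempt 17) and a running kube-scheduler; no kube-apiserver or etcd container exists, which matches the refused connections reported above. Following the crictl hint from the kubeadm output, the exited container's logs can be pulled directly; in the sketch below the truncated container ID is the one from the table, and the socket path is the one the advice itself names.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Container ID from the "container status" table above; crictl accepts
    	// a truncated ID as long as it is unambiguous.
    	const id = "8f3420cdc4209"

    	out, err := exec.Command("sudo", "crictl",
    		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
    		"logs", "--tail", "100", id).CombinedOutput()
    	if err != nil {
    		fmt.Println("crictl logs failed:", err)
    	}
    	fmt.Print(string(out))
    }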
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001531] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.980802] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.093857] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.118733] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.119238] kauditd_printk_skb: 199 callbacks suppressed
	[  +3.381967] kauditd_printk_skb: 224 callbacks suppressed
	[Oct27 22:53] kauditd_printk_skb: 150 callbacks suppressed
	[ +12.550808] kauditd_printk_skb: 5 callbacks suppressed
	[Oct27 22:54] kauditd_printk_skb: 8 callbacks suppressed
	[Oct27 22:55] kauditd_printk_skb: 5 callbacks suppressed
	[Oct27 22:57] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.352192] kauditd_printk_skb: 24 callbacks suppressed
	[ +11.616650] kauditd_printk_skb: 80 callbacks suppressed
	[ +12.122781] kauditd_printk_skb: 5 callbacks suppressed
	[Oct27 22:58] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.422982] kauditd_printk_skb: 5 callbacks suppressed
	[Oct27 22:59] kauditd_printk_skb: 5 callbacks suppressed
	[Oct27 23:01] kauditd_printk_skb: 5 callbacks suppressed
	[ +14.465155] kauditd_printk_skb: 108 callbacks suppressed
	[ +12.353785] kauditd_printk_skb: 5 callbacks suppressed
	[Oct27 23:02] kauditd_printk_skb: 5 callbacks suppressed
	[Oct27 23:03] kauditd_printk_skb: 5 callbacks suppressed
	[Oct27 23:04] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> kernel <==
	 23:05:36 up 14 min,  0 users,  load average: 0.09, 0.11, 0.09
	Linux kubernetes-upgrade-216520 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Oct 25 21:00:46 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-controller-manager [8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55] <==
	I1027 23:04:03.958914       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:04:04.303465       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1027 23:04:04.304089       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:04:04.306951       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1027 23:04:04.307143       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1027 23:04:04.307363       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1027 23:04:04.307487       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1027 23:04:14.309087       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.61.85:8443/healthz\": dial tcp 192.168.61.85:8443: connect: connection refused"
	
	
	==> kube-scheduler [195a5236ccaee598753f72a84a56b561ec17e05af1652f270f6824a45576d8cf] <==
	E1027 23:04:41.686087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.61.85:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 23:04:45.452238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.61.85:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 23:04:46.461957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.61.85:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 23:04:49.391549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.61.85:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 23:04:50.561616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.61.85:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 23:04:51.670040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.61.85:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 23:04:52.304197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.61.85:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 23:04:52.585089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.61.85:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 23:04:54.779919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.61.85:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 23:04:57.074007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.61.85:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 23:05:04.237417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.61.85:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 23:05:09.734239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.61.85:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 23:05:17.791769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.61.85:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 23:05:18.870700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.61.85:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 23:05:21.429774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.61.85:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 23:05:21.581036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.61.85:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 23:05:22.717281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.61.85:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 23:05:23.069940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.61.85:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 23:05:27.368715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.61.85:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 23:05:28.716951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.61.85:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 23:05:30.375372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.61.85:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 23:05:30.954565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.61.85:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 23:05:33.620684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.61.85:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 23:05:35.701648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.61.85:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 23:05:35.750261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.61.85:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	
	
	==> kubelet <==
	Oct 27 23:05:19 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:19.416147    9405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_1\\\" is already in use by c5e3bb3488d8ebc8ca7396b4bbb5cc01a01b151f0cf3c36d05f5d97247fa1e4d. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-kubernetes-upgrade-216520" podUID="0b2f7b30e945705567d89722fabeeb58"
	Oct 27 23:05:22 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:22.220412    9405 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.61.85:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 27 23:05:23 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:23.028343    9405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.61.85:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-216520?timeout=10s\": dial tcp 192.168.61.85:8443: connect: connection refused" interval="7s"
	Oct 27 23:05:23 kubernetes-upgrade-216520 kubelet[9405]: I1027 23:05:23.254189    9405 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-216520"
	Oct 27 23:05:23 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:23.254540    9405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.61.85:8443/api/v1/nodes\": dial tcp 192.168.61.85:8443: connect: connection refused" node="kubernetes-upgrade-216520"
	Oct 27 23:05:23 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:23.505917    9405 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761606323505593773  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 23:05:23 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:23.505939    9405 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761606323505593773  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 23:05:24 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:24.405891    9405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-216520\" not found" node="kubernetes-upgrade-216520"
	Oct 27 23:05:24 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:24.414160    9405 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_1\" is already in use by d38816b64716d174891be1c4a92a57a8ac1c934320365c42f89f8d65f14593b1. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="992d0518663d98d87502e3d3d077c7231d26dd3fadfd11660c3f9f2d604cba05"
	Oct 27 23:05:24 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:24.414277    9405 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-apiserver start failed in pod kube-apiserver-kubernetes-upgrade-216520_kube-system(fe1eaa028fcec4b5ffbcd2010eb65da7): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_1\" is already in use by d38816b64716d174891be1c4a92a57a8ac1c934320365c42f89f8d65f14593b1. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 27 23:05:24 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:24.414342    9405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-216520_kube-system_fe1eaa028fcec4b5ffbcd2010eb65da7_1\\\" is already in use by d38816b64716d174891be1c4a92a57a8ac1c934320365c42f89f8d65f14593b1. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-216520" podUID="fe1eaa028fcec4b5ffbcd2010eb65da7"
	Oct 27 23:05:24 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:24.939656    9405 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.61.85:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.85:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 27 23:05:26 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:26.405800    9405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-216520\" not found" node="kubernetes-upgrade-216520"
	Oct 27 23:05:26 kubernetes-upgrade-216520 kubelet[9405]: I1027 23:05:26.405948    9405 scope.go:117] "RemoveContainer" containerID="8f3420cdc420906b058c468d7a84922a43ff5b906e897414a728d6b1dbcabb55"
	Oct 27 23:05:26 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:26.406096    9405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-216520_kube-system(fb366827ab7b32d13cb327d8b8d99103)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-216520" podUID="fb366827ab7b32d13cb327d8b8d99103"
	Oct 27 23:05:29 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:29.319455    9405 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.61.85:8443/api/v1/namespaces/default/events\": dial tcp 192.168.61.85:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-216520.18727b68994fe737  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-216520,UID:kubernetes-upgrade-216520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node kubernetes-upgrade-216520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-216520,},FirstTimestamp:2025-10-27 23:01:33.434251063 +0000 UTC m=+1.317312702,LastTimestamp:2025-10-27 23:01:33.434251063 +0000 UTC m=+1.317312702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kub
elet,ReportingInstance:kubernetes-upgrade-216520,}"
	Oct 27 23:05:30 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:30.029371    9405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.61.85:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-216520?timeout=10s\": dial tcp 192.168.61.85:8443: connect: connection refused" interval="7s"
	Oct 27 23:05:30 kubernetes-upgrade-216520 kubelet[9405]: I1027 23:05:30.256587    9405 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-216520"
	Oct 27 23:05:30 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:30.257055    9405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.61.85:8443/api/v1/nodes\": dial tcp 192.168.61.85:8443: connect: connection refused" node="kubernetes-upgrade-216520"
	Oct 27 23:05:33 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:33.508315    9405 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761606333508015190  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 23:05:33 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:33.508336    9405 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761606333508015190  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 23:05:34 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:34.406068    9405 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-216520\" not found" node="kubernetes-upgrade-216520"
	Oct 27 23:05:34 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:34.414094    9405 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_1\" is already in use by c5e3bb3488d8ebc8ca7396b4bbb5cc01a01b151f0cf3c36d05f5d97247fa1e4d. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="e655944f420f2b4da858cd490f8021a1f01adb24fa588a4fc356aeca526f527c"
	Oct 27 23:05:34 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:34.414214    9405 kuberuntime_manager.go:1449] "Unhandled Error" err="container etcd start failed in pod etcd-kubernetes-upgrade-216520_kube-system(0b2f7b30e945705567d89722fabeeb58): CreateContainerError: the container name \"k8s_etcd_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_1\" is already in use by c5e3bb3488d8ebc8ca7396b4bbb5cc01a01b151f0cf3c36d05f5d97247fa1e4d. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 27 23:05:34 kubernetes-upgrade-216520 kubelet[9405]: E1027 23:05:34.414248    9405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-kubernetes-upgrade-216520_kube-system_0b2f7b30e945705567d89722fabeeb58_1\\\" is already in use by c5e3bb3488d8ebc8ca7396b4bbb5cc01a01b151f0cf3c36d05f5d97247fa1e4d. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-kubernetes-upgrade-216520" podUID="0b2f7b30e945705567d89722fabeeb58"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-216520 -n kubernetes-upgrade-216520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-216520 -n kubernetes-upgrade-216520: exit status 2 (220.153198ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-216520" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-216520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-216520
--- FAIL: TestKubernetesUpgrade (935.35s)
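The log excerpts above show a consistent failure pattern for this test: the kubelet cannot recreate the etcd and kube-apiserver containers because CRI-O still holds exited containers registered under the same names, so the apiserver on 192.168.61.85:8443 never comes back and the controller-manager and scheduler keep looping on "connection refused". The following is a minimal by-hand sketch for confirming that reading on a future reproduction of this failure, before the cleanup step removes the profile; CONTAINER_ID is a placeholder for whatever stale ID crictl reports, not a value taken from this run:

	# open a shell on the affected node
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-216520
	# list running and exited containers holding the contested names
	sudo crictl ps -a | grep -E 'etcd|kube-apiserver'
	# remove the stale exited container so the name can be reused
	sudo crictl rm CONTAINER_ID
	# let the kubelet recreate the static pods
	sudo systemctl restart kubelet

If the apiserver then becomes reachable again, that would point to the "name is already in use" errors in the kubelet log as the root cause rather than a side effect of the upgrade path itself.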

x
+
TestPause/serial/SecondStartNoReconfiguration (47.72s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-135059 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-135059 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.40636104s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-135059] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-135059" primary control-plane node in "pause-135059" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-135059" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1027 22:47:49.538129  382554 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:47:49.538534  382554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:47:49.538544  382554 out.go:374] Setting ErrFile to fd 2...
	I1027 22:47:49.538550  382554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:47:49.538973  382554 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:47:49.539587  382554 out.go:368] Setting JSON to false
	I1027 22:47:49.541015  382554 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9017,"bootTime":1761596253,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:47:49.541145  382554 start.go:143] virtualization: kvm guest
	I1027 22:47:49.543522  382554 out.go:179] * [pause-135059] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:47:49.545275  382554 notify.go:221] Checking for updates...
	I1027 22:47:49.545377  382554 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:47:49.547655  382554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:47:49.549369  382554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 22:47:49.550808  382554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 22:47:49.552253  382554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:47:49.553787  382554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:47:49.555701  382554 config.go:182] Loaded profile config "pause-135059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:47:49.556225  382554 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:47:49.606056  382554 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 22:47:49.607578  382554 start.go:307] selected driver: kvm2
	I1027 22:47:49.607643  382554 start.go:928] validating driver "kvm2" against &{Name:pause-135059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-135059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:47:49.607910  382554 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:47:49.610207  382554 cni.go:84] Creating CNI manager for ""
	I1027 22:47:49.610351  382554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 22:47:49.610469  382554 start.go:351] cluster config:
	{Name:pause-135059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-135059 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:47:49.610695  382554 iso.go:125] acquiring lock: {Name:mk5fa90c3652bd8ab3eadfb83a864dcc2e121c25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:47:49.612660  382554 out.go:179] * Starting "pause-135059" primary control-plane node in "pause-135059" cluster
	I1027 22:47:49.614050  382554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:47:49.614133  382554 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:47:49.614147  382554 cache.go:59] Caching tarball of preloaded images
	I1027 22:47:49.614300  382554 preload.go:233] Found /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:47:49.614317  382554 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:47:49.614473  382554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/config.json ...
	I1027 22:47:49.614807  382554 start.go:360] acquireMachinesLock for pause-135059: {Name:mka983f7fa498b8241736aecc4fdc7843c00414d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 22:47:56.003883  382554 start.go:364] duration metric: took 6.389005726s to acquireMachinesLock for "pause-135059"
	I1027 22:47:56.003970  382554 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:47:56.003978  382554 fix.go:55] fixHost starting: 
	I1027 22:47:56.006859  382554 fix.go:113] recreateIfNeeded on pause-135059: state=Running err=<nil>
	W1027 22:47:56.006914  382554 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 22:47:56.009429  382554 out.go:252] * Updating the running kvm2 "pause-135059" VM ...
	I1027 22:47:56.009484  382554 machine.go:94] provisionDockerMachine start ...
	I1027 22:47:56.013565  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.014286  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:47:56.014354  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.014680  382554 main.go:143] libmachine: Using SSH client type: native
	I1027 22:47:56.015003  382554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1027 22:47:56.015025  382554 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:47:56.147126  382554 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-135059
	
	I1027 22:47:56.147157  382554 buildroot.go:166] provisioning hostname "pause-135059"
	I1027 22:47:56.150846  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.151476  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:47:56.151513  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.151800  382554 main.go:143] libmachine: Using SSH client type: native
	I1027 22:47:56.152138  382554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1027 22:47:56.152160  382554 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-135059 && echo "pause-135059" | sudo tee /etc/hostname
	I1027 22:47:56.313469  382554 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-135059
	
	I1027 22:47:56.317625  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.318369  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:47:56.318432  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.318713  382554 main.go:143] libmachine: Using SSH client type: native
	I1027 22:47:56.319026  382554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1027 22:47:56.319054  382554 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-135059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-135059/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-135059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:47:56.452075  382554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:47:56.452111  382554 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21790-352679/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-352679/.minikube}
	I1027 22:47:56.452162  382554 buildroot.go:174] setting up certificates
	I1027 22:47:56.452176  382554 provision.go:84] configureAuth start
	I1027 22:47:56.456203  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.456862  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:47:56.456928  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.460349  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.460922  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:47:56.460963  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.461158  382554 provision.go:143] copyHostCerts
	I1027 22:47:56.461225  382554 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem, removing ...
	I1027 22:47:56.461244  382554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem
	I1027 22:47:56.461301  382554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/cert.pem (1123 bytes)
	I1027 22:47:56.461409  382554 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem, removing ...
	I1027 22:47:56.461425  382554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem
	I1027 22:47:56.461462  382554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/key.pem (1675 bytes)
	I1027 22:47:56.461599  382554 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem, removing ...
	I1027 22:47:56.461611  382554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem
	I1027 22:47:56.461635  382554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-352679/.minikube/ca.pem (1082 bytes)
	I1027 22:47:56.461704  382554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem org=jenkins.pause-135059 san=[127.0.0.1 192.168.50.114 localhost minikube pause-135059]
	I1027 22:47:56.703816  382554 provision.go:177] copyRemoteCerts
	I1027 22:47:56.703917  382554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:47:56.708186  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.708731  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:47:56.708799  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.709028  382554 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/pause-135059/id_rsa Username:docker}
	I1027 22:47:56.811572  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 22:47:56.859265  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 22:47:56.905630  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:47:56.953465  382554 provision.go:87] duration metric: took 501.263087ms to configureAuth
	I1027 22:47:56.953504  382554 buildroot.go:189] setting minikube options for container-runtime
	I1027 22:47:56.953844  382554 config.go:182] Loaded profile config "pause-135059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:47:56.957775  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.958296  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:47:56.958325  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:47:56.958636  382554 main.go:143] libmachine: Using SSH client type: native
	I1027 22:47:56.958937  382554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1027 22:47:56.958963  382554 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:48:02.670103  382554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:48:02.670140  382554 machine.go:97] duration metric: took 6.660641173s to provisionDockerMachine
	I1027 22:48:02.670155  382554 start.go:293] postStartSetup for "pause-135059" (driver="kvm2")
	I1027 22:48:02.670170  382554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:48:02.670280  382554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:48:02.674303  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:02.674827  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:48:02.674904  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:02.675097  382554 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/pause-135059/id_rsa Username:docker}
	I1027 22:48:02.767056  382554 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:48:02.772860  382554 info.go:137] Remote host: Buildroot 2025.02
	I1027 22:48:02.772920  382554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/addons for local assets ...
	I1027 22:48:02.773014  382554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-352679/.minikube/files for local assets ...
	I1027 22:48:02.773118  382554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem -> 3566212.pem in /etc/ssl/certs
	I1027 22:48:02.773242  382554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:48:02.787566  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /etc/ssl/certs/3566212.pem (1708 bytes)
	I1027 22:48:02.827301  382554 start.go:296] duration metric: took 157.127619ms for postStartSetup
	I1027 22:48:02.827352  382554 fix.go:57] duration metric: took 6.823374721s for fixHost
	I1027 22:48:02.830568  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:02.831037  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:48:02.831065  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:02.831290  382554 main.go:143] libmachine: Using SSH client type: native
	I1027 22:48:02.831506  382554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1027 22:48:02.831516  382554 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1027 22:48:02.953874  382554 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761605282.946157224
	
	I1027 22:48:02.953919  382554 fix.go:217] guest clock: 1761605282.946157224
	I1027 22:48:02.953931  382554 fix.go:230] Guest: 2025-10-27 22:48:02.946157224 +0000 UTC Remote: 2025-10-27 22:48:02.827356678 +0000 UTC m=+13.365647445 (delta=118.800546ms)
	I1027 22:48:02.953956  382554 fix.go:201] guest clock delta is within tolerance: 118.800546ms
	I1027 22:48:02.953964  382554 start.go:83] releasing machines lock for "pause-135059", held for 6.950041076s
	I1027 22:48:02.958002  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:02.958453  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:48:02.958488  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:02.959287  382554 ssh_runner.go:195] Run: cat /version.json
	I1027 22:48:02.959434  382554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:48:02.963577  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:02.963849  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:02.964121  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:48:02.964159  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:02.964343  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:48:02.964373  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:02.964365  382554 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/pause-135059/id_rsa Username:docker}
	I1027 22:48:02.964702  382554 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/pause-135059/id_rsa Username:docker}
	I1027 22:48:03.057529  382554 ssh_runner.go:195] Run: systemctl --version
	I1027 22:48:03.083364  382554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:48:03.249738  382554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:48:03.262829  382554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:48:03.262975  382554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:48:03.280651  382554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 22:48:03.280685  382554 start.go:496] detecting cgroup driver to use...
	I1027 22:48:03.280758  382554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:48:03.310257  382554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:48:03.330187  382554 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:48:03.330270  382554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:48:03.357536  382554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:48:03.379408  382554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:48:03.601574  382554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:48:03.824821  382554 docker.go:234] disabling docker service ...
	I1027 22:48:03.825004  382554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:48:03.870464  382554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:48:03.898107  382554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:48:04.168855  382554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:48:04.378166  382554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:48:04.398734  382554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:48:04.427428  382554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:48:04.427500  382554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:48:04.444836  382554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 22:48:04.444961  382554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:48:04.461117  382554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:48:04.476368  382554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:48:04.492154  382554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:48:04.514452  382554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:48:04.533067  382554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:48:04.550829  382554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:48:04.570458  382554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:48:04.587899  382554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:48:04.603634  382554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:48:04.787075  382554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:48:07.754862  382554 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.967733103s)
	I1027 22:48:07.754963  382554 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:48:07.755031  382554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:48:07.764100  382554 start.go:564] Will wait 60s for crictl version
	I1027 22:48:07.764187  382554 ssh_runner.go:195] Run: which crictl
	I1027 22:48:07.770554  382554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 22:48:07.817754  382554 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 22:48:07.817908  382554 ssh_runner.go:195] Run: crio --version
	I1027 22:48:07.858047  382554 ssh_runner.go:195] Run: crio --version
	I1027 22:48:07.896709  382554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1027 22:48:07.900833  382554 main.go:143] libmachine: domain pause-135059 has defined MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:07.901304  382554 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:ba:1a", ip: ""} in network mk-pause-135059: {Iface:virbr2 ExpiryTime:2025-10-27 23:46:40 +0000 UTC Type:0 Mac:52:54:00:a6:ba:1a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-135059 Clientid:01:52:54:00:a6:ba:1a}
	I1027 22:48:07.901344  382554 main.go:143] libmachine: domain pause-135059 has defined IP address 192.168.50.114 and MAC address 52:54:00:a6:ba:1a in network mk-pause-135059
	I1027 22:48:07.901534  382554 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1027 22:48:07.907177  382554 kubeadm.go:884] updating cluster {Name:pause-135059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-135059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:48:07.907419  382554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:48:07.907509  382554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:48:07.960205  382554 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:48:07.960230  382554 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:48:07.960286  382554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:48:08.004333  382554 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:48:08.004389  382554 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:48:08.004402  382554 kubeadm.go:935] updating node { 192.168.50.114 8443 v1.34.1 crio true true} ...
	I1027 22:48:08.004557  382554 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-135059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-135059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:48:08.004637  382554 ssh_runner.go:195] Run: crio config
	I1027 22:48:08.057556  382554 cni.go:84] Creating CNI manager for ""
	I1027 22:48:08.057583  382554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 22:48:08.057611  382554 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:48:08.057635  382554 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-135059 NodeName:pause-135059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:48:08.057755  382554 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-135059"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:48:08.057823  382554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:48:08.071811  382554 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:48:08.071904  382554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:48:08.086862  382554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1027 22:48:08.111838  382554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:48:08.136691  382554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1027 22:48:08.161494  382554 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I1027 22:48:08.166490  382554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:48:08.355641  382554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:48:08.375295  382554 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059 for IP: 192.168.50.114
	I1027 22:48:08.375324  382554 certs.go:195] generating shared ca certs ...
	I1027 22:48:08.375346  382554 certs.go:227] acquiring lock for ca certs: {Name:mk64cd4e3986ee7cc99471fffd6acf1db89a5b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:48:08.375548  382554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key
	I1027 22:48:08.375617  382554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key
	I1027 22:48:08.375634  382554 certs.go:257] generating profile certs ...
	I1027 22:48:08.375753  382554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/client.key
	I1027 22:48:08.375825  382554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/apiserver.key.ef457745
	I1027 22:48:08.375879  382554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/proxy-client.key
	I1027 22:48:08.376048  382554 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem (1338 bytes)
	W1027 22:48:08.376100  382554 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621_empty.pem, impossibly tiny 0 bytes
	I1027 22:48:08.376116  382554 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:48:08.376160  382554 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/ca.pem (1082 bytes)
	I1027 22:48:08.376195  382554 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:48:08.376244  382554 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/certs/key.pem (1675 bytes)
	I1027 22:48:08.376314  382554 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem (1708 bytes)
	I1027 22:48:08.377107  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:48:08.414133  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 22:48:08.455227  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:48:08.492820  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 22:48:08.532590  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 22:48:08.576909  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:48:08.613685  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:48:08.658028  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:48:08.699260  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/ssl/certs/3566212.pem --> /usr/share/ca-certificates/3566212.pem (1708 bytes)
	I1027 22:48:08.741882  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:48:08.780126  382554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-352679/.minikube/certs/356621.pem --> /usr/share/ca-certificates/356621.pem (1338 bytes)
	I1027 22:48:08.819002  382554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:48:08.847926  382554 ssh_runner.go:195] Run: openssl version
	I1027 22:48:08.855872  382554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356621.pem && ln -fs /usr/share/ca-certificates/356621.pem /etc/ssl/certs/356621.pem"
	I1027 22:48:08.885423  382554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356621.pem
	I1027 22:48:08.921246  382554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 21:58 /usr/share/ca-certificates/356621.pem
	I1027 22:48:08.921327  382554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356621.pem
	I1027 22:48:08.963673  382554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356621.pem /etc/ssl/certs/51391683.0"
	I1027 22:48:08.997455  382554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3566212.pem && ln -fs /usr/share/ca-certificates/3566212.pem /etc/ssl/certs/3566212.pem"
	I1027 22:48:09.032350  382554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3566212.pem
	I1027 22:48:09.042666  382554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 21:58 /usr/share/ca-certificates/3566212.pem
	I1027 22:48:09.042780  382554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3566212.pem
	I1027 22:48:09.064085  382554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3566212.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:48:09.100763  382554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:48:09.142421  382554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:48:09.160521  382554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:48:09.160611  382554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:48:09.182406  382554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:48:09.240535  382554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:48:09.255148  382554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:48:09.273963  382554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:48:09.295176  382554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:48:09.314085  382554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:48:09.337909  382554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:48:09.372922  382554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1027 22:48:09.399502  382554 kubeadm.go:401] StartCluster: {Name:pause-135059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-135059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:48:09.399710  382554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:48:09.399813  382554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:48:09.540216  382554 cri.go:89] found id: "6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f"
	I1027 22:48:09.540247  382554 cri.go:89] found id: "314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0"
	I1027 22:48:09.540254  382554 cri.go:89] found id: "11970ad0cbbe8c73ef08e4f1e44ab5d62652fc84fbb3bfabbf6b39bcee9986e8"
	I1027 22:48:09.540259  382554 cri.go:89] found id: "734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc"
	I1027 22:48:09.540264  382554 cri.go:89] found id: "a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d"
	I1027 22:48:09.540269  382554 cri.go:89] found id: "b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02"
	I1027 22:48:09.540274  382554 cri.go:89] found id: ""
	I1027 22:48:09.540335  382554 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-135059 -n pause-135059
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-135059 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-135059 logs -n 25: (1.513506289s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-561731 sudo cat /etc/kubernetes/kubelet.conf                                                                                                      │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /var/lib/kubelet/config.yaml                                                                                                      │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl status docker --all --full --no-pager                                                                                       │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl cat docker --no-pager                                                                                                       │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /etc/docker/daemon.json                                                                                                           │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo docker system info                                                                                                                    │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl status cri-docker --all --full --no-pager                                                                                   │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl cat cri-docker --no-pager                                                                                                   │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                              │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                        │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cri-dockerd --version                                                                                                                 │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl status containerd --all --full --no-pager                                                                                   │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl cat containerd --no-pager                                                                                                   │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /lib/systemd/system/containerd.service                                                                                            │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /etc/containerd/config.toml                                                                                                       │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo containerd config dump                                                                                                                │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo crio config                                                                                                                           │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ delete  │ -p cilium-561731                                                                                                                                            │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │ 27 Oct 25 22:48 UTC │
	│ start   │ -p guest-734990 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-734990           │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-977671 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-977671 │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ delete  │ -p running-upgrade-977671                                                                                                                                   │ running-upgrade-977671 │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │ 27 Oct 25 22:48 UTC │
	│ start   │ -p cert-expiration-858253 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                        │ cert-expiration-858253 │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:48:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:48:30.553396  384901 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:48:30.553656  384901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:48:30.553660  384901 out.go:374] Setting ErrFile to fd 2...
	I1027 22:48:30.553663  384901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:48:30.553862  384901 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:48:30.554411  384901 out.go:368] Setting JSON to false
	I1027 22:48:30.555399  384901 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9058,"bootTime":1761596253,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:48:30.555484  384901 start.go:143] virtualization: kvm guest
	I1027 22:48:30.557986  384901 out.go:179] * [cert-expiration-858253] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:48:30.559749  384901 notify.go:221] Checking for updates...
	I1027 22:48:30.559765  384901 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:48:30.561241  384901 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:48:30.563282  384901 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 22:48:30.565200  384901 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 22:48:30.566611  384901 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:48:30.568227  384901 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:48:30.570155  384901 config.go:182] Loaded profile config "NoKubernetes-830800": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1027 22:48:30.570248  384901 config.go:182] Loaded profile config "guest-734990": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1027 22:48:30.570340  384901 config.go:182] Loaded profile config "pause-135059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:48:30.570441  384901 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:48:30.610020  384901 out.go:179] * Using the kvm2 driver based on user configuration
	I1027 22:48:30.611496  384901 start.go:307] selected driver: kvm2
	I1027 22:48:30.611505  384901 start.go:928] validating driver "kvm2" against <nil>
	I1027 22:48:30.611527  384901 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:48:30.612341  384901 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:48:30.612550  384901 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 22:48:30.612572  384901 cni.go:84] Creating CNI manager for ""
	I1027 22:48:30.612615  384901 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 22:48:30.612619  384901 start_flags.go:335] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 22:48:30.612655  384901 start.go:351] cluster config:
	{Name:cert-expiration-858253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-858253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:48:30.612756  384901 iso.go:125] acquiring lock: {Name:mk5fa90c3652bd8ab3eadfb83a864dcc2e121c25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:48:30.614556  384901 out.go:179] * Starting "cert-expiration-858253" primary control-plane node in "cert-expiration-858253" cluster
	I1027 22:48:27.973597  382737 main.go:143] libmachine: domain NoKubernetes-830800 has defined MAC address 52:54:00:3d:f3:ad in network mk-NoKubernetes-830800
	I1027 22:48:27.974500  382737 main.go:143] libmachine: no network interface addresses found for domain NoKubernetes-830800 (source=lease)
	I1027 22:48:27.974522  382737 main.go:143] libmachine: trying to list again with source=arp
	I1027 22:48:27.974999  382737 main.go:143] libmachine: unable to find current IP address of domain NoKubernetes-830800 in network mk-NoKubernetes-830800 (interfaces detected: [])
	I1027 22:48:27.975050  382737 retry.go:31] will retry after 3.532808478s: waiting for domain to come up
	I1027 22:48:31.509207  382737 main.go:143] libmachine: domain NoKubernetes-830800 has defined MAC address 52:54:00:3d:f3:ad in network mk-NoKubernetes-830800
	I1027 22:48:31.509832  382737 main.go:143] libmachine: no network interface addresses found for domain NoKubernetes-830800 (source=lease)
	I1027 22:48:31.509870  382737 main.go:143] libmachine: trying to list again with source=arp
	I1027 22:48:31.510225  382737 main.go:143] libmachine: unable to find current IP address of domain NoKubernetes-830800 in network mk-NoKubernetes-830800 (interfaces detected: [])
	I1027 22:48:31.510260  382737 retry.go:31] will retry after 3.314493339s: waiting for domain to come up
	W1027 22:48:31.276251  382554 pod_ready.go:104] pod "kube-controller-manager-pause-135059" is not "Ready", error: <nil>
	I1027 22:48:32.776398  382554 pod_ready.go:94] pod "kube-controller-manager-pause-135059" is "Ready"
	I1027 22:48:32.776434  382554 pod_ready.go:86] duration metric: took 8.008507162s for pod "kube-controller-manager-pause-135059" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:48:32.779373  382554 pod_ready.go:83] waiting for pod "kube-proxy-nsz84" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:48:32.784220  382554 pod_ready.go:94] pod "kube-proxy-nsz84" is "Ready"
	I1027 22:48:32.784248  382554 pod_ready.go:86] duration metric: took 4.843144ms for pod "kube-proxy-nsz84" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:48:32.787385  382554 pod_ready.go:83] waiting for pod "kube-scheduler-pause-135059" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:48:32.794353  382554 pod_ready.go:94] pod "kube-scheduler-pause-135059" is "Ready"
	I1027 22:48:32.794394  382554 pod_ready.go:86] duration metric: took 6.969798ms for pod "kube-scheduler-pause-135059" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:48:32.794410  382554 pod_ready.go:40] duration metric: took 15.085025546s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:48:32.843547  382554 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:48:32.846209  382554 out.go:179] * Done! kubectl is now configured to use "pause-135059" cluster and "default" namespace by default
	I1027 22:48:28.231487  384808 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1027 22:48:28.257917  384808 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1027 22:48:28.347681  384808 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1027 22:48:28.347882  384808 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/guest-734990/config.json ...
	I1027 22:48:28.347954  384808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/guest-734990/config.json: {Name:mk1a31915af0a770616e16129d798f1fd3af2a31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:48:28.348139  384808 start.go:360] acquireMachinesLock for guest-734990: {Name:mka983f7fa498b8241736aecc4fdc7843c00414d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	
	
	==> CRI-O <==
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.565273053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605313565250766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=476e0915-5ffd-4831-82be-098d08803a73 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.565901323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f269319-daf8-4ad3-98d1-fc500b365fae name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.566132076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f269319-daf8-4ad3-98d1-fc500b365fae name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.566901767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:565a0834bf3b2b8706ccd28c24a5d9865f8f20ac9ffe6142402549891f268f71,PodSandboxId:0c94c58c05fd84dcf58137de285985b7f3a27b2f342d8919577fd51334da53a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761605295470011456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e03a2353b01a80aae1d92255ee746cdb6ed8e92a926b49b543ca7d80a2609f,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40be64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761605294454915295,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacbfc719980a146a0b66c5cef17bf3537c3c76afdd05d0d4dddde70d3e0a365,PodSandboxId:74900fbc30ed6c24396b3d0883838629c829da6446ce7a2229aacf6f0612ce3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761605289778937674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-ae45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd3fea7517008c61e2d4d1495c6b7c3d9344ee8329e99f8152b34fcc6e0b60,PodSandboxId:134e0c53b205b4507272d7b42c37b9f844c0f04aa00262e0242ea8a628014782,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761605289710678404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2097e92620ed00b58419da7e652fe8c20248f232e8950060b1ba18da02576f,PodSandboxId:fd86d45cf7571ac49dd3131fdfcb9657996e13c0a7d612f5a96049aa1d805c44,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761605289706218017,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf5e9497e2a89a3955d46adc0dfa81b0fabd5c3fd3baad08b583cf58146cb9,PodSandboxId:43b1e56450e4e77f688e68abfe3adbecdcd788f4c151052c3d5f7b160ea93856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761605289470580819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40b
e64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761605289410914011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f,PodSandboxId:36075fe63e6e4fb596316b932dab86a0154ed8826580a863a27020a474972390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761605235725179561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0,PodSandboxId:7bf46d7b55a3909ee2f589ac2751c10124bf1556b6da388dc91adc57516b0f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761605235279207730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-a
e45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc,PodSandboxId:e57b24170b419ad81e39b92b749854124d38aeed3fc8c85b651ac684a54a314d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761605220847348477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},
Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02,PodSandboxId:ad2e54d5a5f6c1363501176d08e0cf25275b47888e3807802867efb97abef28a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761605220712375420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d,PodSandboxId:4884affbd101098b9bed07ab31747f7359f3637f12c8822d3a5001d5abb96bff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761605220721762957,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f269319-daf8-4ad3-98d1-fc500b365fae name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.616842293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=776ef7bc-4dd3-4df0-b0c4-cdf4e9dc53bd name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.616926045Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=776ef7bc-4dd3-4df0-b0c4-cdf4e9dc53bd name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.618733385Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bafd1148-37ca-4368-9f3e-406860987dbf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.619187807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605313619164474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bafd1148-37ca-4368-9f3e-406860987dbf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.620213545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14b7987d-ce5b-4ea4-9121-8bef487e9de8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.620310386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14b7987d-ce5b-4ea4-9121-8bef487e9de8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.620624643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:565a0834bf3b2b8706ccd28c24a5d9865f8f20ac9ffe6142402549891f268f71,PodSandboxId:0c94c58c05fd84dcf58137de285985b7f3a27b2f342d8919577fd51334da53a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761605295470011456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e03a2353b01a80aae1d92255ee746cdb6ed8e92a926b49b543ca7d80a2609f,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40be64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761605294454915295,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacbfc719980a146a0b66c5cef17bf3537c3c76afdd05d0d4dddde70d3e0a365,PodSandboxId:74900fbc30ed6c24396b3d0883838629c829da6446ce7a2229aacf6f0612ce3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761605289778937674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-ae45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd3fea7517008c61e2d4d1495c6b7c3d9344ee8329e99f8152b34fcc6e0b60,PodSandboxId:134e0c53b205b4507272d7b42c37b9f844c0f04aa00262e0242ea8a628014782,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761605289710678404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2097e92620ed00b58419da7e652fe8c20248f232e8950060b1ba18da02576f,PodSandboxId:fd86d45cf7571ac49dd3131fdfcb9657996e13c0a7d612f5a96049aa1d805c44,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761605289706218017,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf5e9497e2a89a3955d46adc0dfa81b0fabd5c3fd3baad08b583cf58146cb9,PodSandboxId:43b1e56450e4e77f688e68abfe3adbecdcd788f4c151052c3d5f7b160ea93856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761605289470580819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40b
e64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761605289410914011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f,PodSandboxId:36075fe63e6e4fb596316b932dab86a0154ed8826580a863a27020a474972390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761605235725179561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0,PodSandboxId:7bf46d7b55a3909ee2f589ac2751c10124bf1556b6da388dc91adc57516b0f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761605235279207730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-a
e45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc,PodSandboxId:e57b24170b419ad81e39b92b749854124d38aeed3fc8c85b651ac684a54a314d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761605220847348477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},
Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02,PodSandboxId:ad2e54d5a5f6c1363501176d08e0cf25275b47888e3807802867efb97abef28a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761605220712375420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d,PodSandboxId:4884affbd101098b9bed07ab31747f7359f3637f12c8822d3a5001d5abb96bff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761605220721762957,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14b7987d-ce5b-4ea4-9121-8bef487e9de8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.672255319Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c6813c5-88de-4b75-9a62-30133e24d875 name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.672745368Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c6813c5-88de-4b75-9a62-30133e24d875 name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.674434194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5437a091-65ac-4403-8766-e8220e2b1a5d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.674944081Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605313674921786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5437a091-65ac-4403-8766-e8220e2b1a5d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.675422031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdcc76fc-196b-47d6-a745-208cc9462f47 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.675476204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdcc76fc-196b-47d6-a745-208cc9462f47 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.675760363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:565a0834bf3b2b8706ccd28c24a5d9865f8f20ac9ffe6142402549891f268f71,PodSandboxId:0c94c58c05fd84dcf58137de285985b7f3a27b2f342d8919577fd51334da53a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761605295470011456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e03a2353b01a80aae1d92255ee746cdb6ed8e92a926b49b543ca7d80a2609f,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40be64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761605294454915295,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacbfc719980a146a0b66c5cef17bf3537c3c76afdd05d0d4dddde70d3e0a365,PodSandboxId:74900fbc30ed6c24396b3d0883838629c829da6446ce7a2229aacf6f0612ce3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761605289778937674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-ae45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd3fea7517008c61e2d4d1495c6b7c3d9344ee8329e99f8152b34fcc6e0b60,PodSandboxId:134e0c53b205b4507272d7b42c37b9f844c0f04aa00262e0242ea8a628014782,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761605289710678404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2097e92620ed00b58419da7e652fe8c20248f232e8950060b1ba18da02576f,PodSandboxId:fd86d45cf7571ac49dd3131fdfcb9657996e13c0a7d612f5a96049aa1d805c44,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761605289706218017,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf5e9497e2a89a3955d46adc0dfa81b0fabd5c3fd3baad08b583cf58146cb9,PodSandboxId:43b1e56450e4e77f688e68abfe3adbecdcd788f4c151052c3d5f7b160ea93856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761605289470580819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40b
e64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761605289410914011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f,PodSandboxId:36075fe63e6e4fb596316b932dab86a0154ed8826580a863a27020a474972390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761605235725179561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0,PodSandboxId:7bf46d7b55a3909ee2f589ac2751c10124bf1556b6da388dc91adc57516b0f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761605235279207730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-a
e45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc,PodSandboxId:e57b24170b419ad81e39b92b749854124d38aeed3fc8c85b651ac684a54a314d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761605220847348477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},
Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02,PodSandboxId:ad2e54d5a5f6c1363501176d08e0cf25275b47888e3807802867efb97abef28a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761605220712375420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d,PodSandboxId:4884affbd101098b9bed07ab31747f7359f3637f12c8822d3a5001d5abb96bff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761605220721762957,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdcc76fc-196b-47d6-a745-208cc9462f47 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.728951100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b79d1eef-bb4d-4e2c-92d2-c6c65bc4450d name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.729029624Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b79d1eef-bb4d-4e2c-92d2-c6c65bc4450d name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.731009204Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6af30368-c9c7-4147-aeb5-14d8bf3b7363 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.731392563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605313731370876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6af30368-c9c7-4147-aeb5-14d8bf3b7363 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.732059898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d295ca1c-535b-4f53-8ee6-9386424d16d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.732120337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d295ca1c-535b-4f53-8ee6-9386424d16d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:33 pause-135059 crio[2543]: time="2025-10-27 22:48:33.732370935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:565a0834bf3b2b8706ccd28c24a5d9865f8f20ac9ffe6142402549891f268f71,PodSandboxId:0c94c58c05fd84dcf58137de285985b7f3a27b2f342d8919577fd51334da53a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761605295470011456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e03a2353b01a80aae1d92255ee746cdb6ed8e92a926b49b543ca7d80a2609f,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40be64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761605294454915295,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacbfc719980a146a0b66c5cef17bf3537c3c76afdd05d0d4dddde70d3e0a365,PodSandboxId:74900fbc30ed6c24396b3d0883838629c829da6446ce7a2229aacf6f0612ce3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761605289778937674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-ae45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd3fea7517008c61e2d4d1495c6b7c3d9344ee8329e99f8152b34fcc6e0b60,PodSandboxId:134e0c53b205b4507272d7b42c37b9f844c0f04aa00262e0242ea8a628014782,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761605289710678404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2097e92620ed00b58419da7e652fe8c20248f232e8950060b1ba18da02576f,PodSandboxId:fd86d45cf7571ac49dd3131fdfcb9657996e13c0a7d612f5a96049aa1d805c44,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761605289706218017,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf5e9497e2a89a3955d46adc0dfa81b0fabd5c3fd3baad08b583cf58146cb9,PodSandboxId:43b1e56450e4e77f688e68abfe3adbecdcd788f4c151052c3d5f7b160ea93856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761605289470580819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40b
e64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761605289410914011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f,PodSandboxId:36075fe63e6e4fb596316b932dab86a0154ed8826580a863a27020a474972390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761605235725179561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0,PodSandboxId:7bf46d7b55a3909ee2f589ac2751c10124bf1556b6da388dc91adc57516b0f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761605235279207730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-a
e45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc,PodSandboxId:e57b24170b419ad81e39b92b749854124d38aeed3fc8c85b651ac684a54a314d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761605220847348477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},
Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02,PodSandboxId:ad2e54d5a5f6c1363501176d08e0cf25275b47888e3807802867efb97abef28a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761605220712375420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d,PodSandboxId:4884affbd101098b9bed07ab31747f7359f3637f12c8822d3a5001d5abb96bff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761605220721762957,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d295ca1c-535b-4f53-8ee6-9386424d16d8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	565a0834bf3b2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 seconds ago       Running             coredns                   1                   0c94c58c05fd8       coredns-66bc5c9577-njs4r
	78e03a2353b01       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   19 seconds ago       Running             kube-controller-manager   2                   d2bca64550852       kube-controller-manager-pause-135059
	eacbfc719980a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago       Running             kube-proxy                1                   74900fbc30ed6       kube-proxy-nsz84
	b8bd3fea75170       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   24 seconds ago       Running             kube-apiserver            1                   134e0c53b205b       kube-apiserver-pause-135059
	af2097e92620e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   24 seconds ago       Running             etcd                      1                   fd86d45cf7571       etcd-pause-135059
	8aaf5e9497e2a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   24 seconds ago       Running             kube-scheduler            1                   43b1e56450e4e       kube-scheduler-pause-135059
	d228777a0d0e2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   24 seconds ago       Exited              kube-controller-manager   1                   d2bca64550852       kube-controller-manager-pause-135059
	6d1f29a2e124f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   36075fe63e6e4       coredns-66bc5c9577-njs4r
	314708a637d72       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   7bf46d7b55a39       kube-proxy-nsz84
	734735ba00bcd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Exited              kube-scheduler            0                   e57b24170b419       kube-scheduler-pause-135059
	a584500f1479b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Exited              kube-apiserver            0                   4884affbd1010       kube-apiserver-pause-135059
	b414803bfd37f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   ad2e54d5a5f6c       etcd-pause-135059
	
	
	==> coredns [565a0834bf3b2b8706ccd28c24a5d9865f8f20ac9ffe6142402549891f268f71] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47502 - 6662 "HINFO IN 6331120275465665382.7655734933809026510. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.123586341s
	
	
	==> coredns [6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55645 - 42717 "HINFO IN 2493165542446951685.3773547996396777182. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057352806s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-135059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-135059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=pause-135059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_47_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:47:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-135059
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:48:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:48:14 +0000   Mon, 27 Oct 2025 22:47:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:48:14 +0000   Mon, 27 Oct 2025 22:47:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:48:14 +0000   Mon, 27 Oct 2025 22:47:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:48:14 +0000   Mon, 27 Oct 2025 22:47:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.114
	  Hostname:    pause-135059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 f29c39f8ad89470cad4177f304789d6f
	  System UUID:                f29c39f8-ad89-470c-ad41-77f304789d6f
	  Boot ID:                    4d4a86a4-42d2-4f35-9e35-44686cb2d8ca
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-njs4r                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     80s
	  kube-system                 etcd-pause-135059                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         87s
	  kube-system                 kube-apiserver-pause-135059             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-135059    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-nsz84                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-135059             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 78s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     96s (x7 over 96s)  kubelet          Node pause-135059 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    96s (x8 over 96s)  kubelet          Node pause-135059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  96s (x8 over 96s)  kubelet          Node pause-135059 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node pause-135059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node pause-135059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s                kubelet          Node pause-135059 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s                kubelet          Node pause-135059 status is now: NodeReady
	  Normal  RegisteredNode           82s                node-controller  Node pause-135059 event: Registered Node pause-135059 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node pause-135059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node pause-135059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node pause-135059 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-135059 event: Registered Node pause-135059 in Controller
	
	
	==> dmesg <==
	[Oct27 22:46] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000134] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.013929] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.206701] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089252] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.124394] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.107838] kauditd_printk_skb: 18 callbacks suppressed
	[Oct27 22:47] kauditd_printk_skb: 171 callbacks suppressed
	[  +2.634402] kauditd_printk_skb: 19 callbacks suppressed
	[ +32.414344] kauditd_printk_skb: 183 callbacks suppressed
	[Oct27 22:48] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.946510] kauditd_printk_skb: 240 callbacks suppressed
	
	
	==> etcd [af2097e92620ed00b58419da7e652fe8c20248f232e8950060b1ba18da02576f] <==
	{"level":"warn","ts":"2025-10-27T22:48:12.229295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.254192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.270158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.327274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.334170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.382216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.382391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.407537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.436086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.465622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.488420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.504410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.556732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.572546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.590314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.606107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.629695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.642561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.673854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.699396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.722864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.758483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.791436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.829247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:13.118103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34964","server-name":"","error":"EOF"}
	
	
	==> etcd [b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02] <==
	{"level":"info","ts":"2025-10-27T22:47:14.657779Z","caller":"traceutil/trace.go:172","msg":"trace[23494571] range","detail":"{range_begin:/registry/minions/pause-135059; range_end:; response_count:1; response_revision:362; }","duration":"723.309084ms","start":"2025-10-27T22:47:13.934459Z","end":"2025-10-27T22:47:14.657768Z","steps":["trace[23494571] 'agreement among raft nodes before linearized reading'  (duration: 712.632286ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:47:14.660116Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T22:47:13.934445Z","time spent":"725.654942ms","remote":"127.0.0.1:52172","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":5303,"request content":"key:\"/registry/minions/pause-135059\" limit:1 "}
	{"level":"info","ts":"2025-10-27T22:47:14.715575Z","caller":"traceutil/trace.go:172","msg":"trace[575030198] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"528.480048ms","start":"2025-10-27T22:47:14.187078Z","end":"2025-10-27T22:47:14.715558Z","steps":["trace[575030198] 'process raft request'  (duration: 528.278979ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:47:14.715760Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T22:47:14.187059Z","time spent":"528.644164ms","remote":"127.0.0.1:52340","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-i5olmgd22aacl34fqifrgudzvu\" mod_revision:14 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-i5olmgd22aacl34fqifrgudzvu\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-i5olmgd22aacl34fqifrgudzvu\" > >"}
	{"level":"info","ts":"2025-10-27T22:47:35.486327Z","caller":"traceutil/trace.go:172","msg":"trace[717307423] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"203.964498ms","start":"2025-10-27T22:47:35.282349Z","end":"2025-10-27T22:47:35.486314Z","steps":["trace[717307423] 'process raft request'  (duration: 203.554534ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:47:35.743212Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.586911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T22:47:35.743256Z","caller":"traceutil/trace.go:172","msg":"trace[1916930527] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:425; }","duration":"121.639776ms","start":"2025-10-27T22:47:35.621608Z","end":"2025-10-27T22:47:35.743248Z","steps":["trace[1916930527] 'range keys from in-memory index tree'  (duration: 121.456469ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:47:57.140470Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T22:47:57.140543Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-135059","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.114:2380"],"advertise-client-urls":["https://192.168.50.114:2379"]}
	{"level":"error","ts":"2025-10-27T22:47:57.140634Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T22:47:58.247210Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T22:47:58.248941Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:47:58.248982Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f0e2ae880f3a35e5","current-leader-member-id":"f0e2ae880f3a35e5"}
	{"level":"info","ts":"2025-10-27T22:47:58.249048Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-27T22:47:58.249061Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T22:47:58.249130Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T22:47:58.249203Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T22:47:58.249214Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T22:47:58.249265Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.114:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T22:47:58.249275Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.114:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T22:47:58.249283Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.114:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:47:58.252959Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.114:2380"}
	{"level":"error","ts":"2025-10-27T22:47:58.253057Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.114:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:47:58.253124Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.114:2380"}
	{"level":"info","ts":"2025-10-27T22:47:58.253138Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-135059","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.114:2380"],"advertise-client-urls":["https://192.168.50.114:2379"]}
	
	
	==> kernel <==
	 22:48:34 up 2 min,  0 users,  load average: 0.78, 0.40, 0.15
	Linux pause-135059 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Oct 25 21:00:46 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d] <==
	W1027 22:47:57.160285       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160347       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160449       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160519       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160575       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160628       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160779       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160852       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160908       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160974       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161017       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161059       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161102       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161182       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161236       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161279       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161321       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161361       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161576       1 logging.go:55] [core] [Channel #11 SubChannel #13]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161581       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161619       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.162014       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.162208       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.162406       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.163813       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b8bd3fea7517008c61e2d4d1495c6b7c3d9344ee8329e99f8152b34fcc6e0b60] <==
	I1027 22:48:14.227203       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:48:14.242037       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 22:48:14.242090       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 22:48:14.242096       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 22:48:14.246219       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 22:48:14.252588       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 22:48:14.253935       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 22:48:14.261451       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:48:14.262063       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 22:48:14.262143       1 policy_source.go:240] refreshing policies
	I1027 22:48:14.265189       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 22:48:14.265332       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1027 22:48:14.278432       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 22:48:14.295499       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:48:14.307877       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:48:14.324097       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 22:48:14.328371       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:48:15.072595       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:48:15.205491       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:48:16.993230       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:48:17.080845       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:48:17.159457       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:48:17.189339       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:48:19.222820       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:48:19.275227       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [78e03a2353b01a80aae1d92255ee746cdb6ed8e92a926b49b543ca7d80a2609f] <==
	I1027 22:48:19.055938       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:48:19.059006       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:48:19.062294       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:48:19.065842       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 22:48:19.066850       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:48:19.067405       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 22:48:19.068128       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 22:48:19.069750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 22:48:19.069860       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:48:19.069872       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 22:48:19.069879       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 22:48:19.070258       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:48:19.071108       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 22:48:19.074312       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 22:48:19.082623       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:48:19.084423       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:48:19.098920       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:48:19.103078       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 22:48:19.116737       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:48:19.116837       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:48:19.117396       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:48:19.117456       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-135059"
	I1027 22:48:19.117494       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 22:48:19.119016       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:48:19.119193       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-controller-manager [d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2] <==
	
	
	==> kube-proxy [314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0] <==
	I1027 22:47:15.714437       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:47:15.826058       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:47:15.826131       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.114"]
	E1027 22:47:15.826267       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:47:15.978280       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 22:47:15.978389       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 22:47:15.978435       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:47:15.992759       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:47:15.993191       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:47:15.993237       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:47:15.997969       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:47:15.998022       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:47:16.005583       1 config.go:200] "Starting service config controller"
	I1027 22:47:16.007269       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:47:16.006156       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:47:16.007900       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:47:16.006407       1 config.go:309] "Starting node config controller"
	I1027 22:47:16.007919       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:47:16.007924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:47:16.099002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:47:16.107922       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:47:16.108163       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [eacbfc719980a146a0b66c5cef17bf3537c3c76afdd05d0d4dddde70d3e0a365] <==
	I1027 22:48:15.698610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:48:15.802107       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:48:15.802354       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.114"]
	E1027 22:48:15.802467       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:48:15.908310       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 22:48:15.908480       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 22:48:15.908546       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:48:15.936308       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:48:15.936968       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:48:15.937116       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:48:15.947918       1 config.go:200] "Starting service config controller"
	I1027 22:48:15.948092       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:48:15.948151       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:48:15.948174       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:48:15.948204       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:48:15.948226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:48:15.951048       1 config.go:309] "Starting node config controller"
	I1027 22:48:15.951128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:48:15.951157       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:48:16.048286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:48:16.048343       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:48:16.048371       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc] <==
	E1027 22:47:04.813407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 22:47:04.816091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 22:47:04.820592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:47:04.822623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1027 22:47:04.811747       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1027 22:47:04.828280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:47:04.829381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:47:04.829746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:47:04.829835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:47:04.829871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:47:04.829896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:47:04.829932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:47:04.829943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:47:04.830015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:47:04.830050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:47:04.830075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:47:04.830195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:47:04.830231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:47:04.831108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:47:04.831580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1027 22:47:06.227450       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:47:57.141451       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 22:47:57.149525       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 22:47:57.149538       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 22:47:57.149552       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8aaf5e9497e2a89a3955d46adc0dfa81b0fabd5c3fd3baad08b583cf58146cb9] <==
	I1027 22:48:11.878233       1 serving.go:386] Generated self-signed cert in-memory
	W1027 22:48:14.099088       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:48:14.099154       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:48:14.099167       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:48:14.099182       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:48:14.213531       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:48:14.213579       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:48:14.223529       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:48:14.224039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:48:14.224075       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:48:14.224144       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:48:14.324407       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.308999    3399 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.312282    3399 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: E1027 22:48:14.341912    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-135059\" already exists" pod="kube-system/etcd-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.341946    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: E1027 22:48:14.368067    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-135059\" already exists" pod="kube-system/kube-apiserver-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.368758    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: E1027 22:48:14.384091    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-135059\" already exists" pod="kube-system/kube-controller-manager-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.384333    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: E1027 22:48:14.397591    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-135059\" already exists" pod="kube-system/kube-scheduler-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.416449    3399 scope.go:117] "RemoveContainer" containerID="d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.044720    3399 apiserver.go:52] "Watching apiserver"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.125333    3399 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.196759    3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5eb5cdc0-f13c-4cc0-ae45-993720272d35-lib-modules\") pod \"kube-proxy-nsz84\" (UID: \"5eb5cdc0-f13c-4cc0-ae45-993720272d35\") " pod="kube-system/kube-proxy-nsz84"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.196870    3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5eb5cdc0-f13c-4cc0-ae45-993720272d35-xtables-lock\") pod \"kube-proxy-nsz84\" (UID: \"5eb5cdc0-f13c-4cc0-ae45-993720272d35\") " pod="kube-system/kube-proxy-nsz84"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.352140    3399 scope.go:117] "RemoveContainer" containerID="314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.428371    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-135059"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.433316    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-135059"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.440159    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-135059"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: E1027 22:48:15.627894    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-135059\" already exists" pod="kube-system/kube-apiserver-pause-135059"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: E1027 22:48:15.627894    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-135059\" already exists" pod="kube-system/etcd-pause-135059"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: E1027 22:48:15.671618    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-135059\" already exists" pod="kube-system/kube-scheduler-pause-135059"
	Oct 27 22:48:23 pause-135059 kubelet[3399]: E1027 22:48:23.395599    3399 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761605303395164871  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 22:48:23 pause-135059 kubelet[3399]: E1027 22:48:23.395717    3399 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761605303395164871  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 22:48:33 pause-135059 kubelet[3399]: E1027 22:48:33.403473    3399 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761605313397159815  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 22:48:33 pause-135059 kubelet[3399]: E1027 22:48:33.403550    3399 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761605313397159815  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-135059 -n pause-135059
helpers_test.go:269: (dbg) Run:  kubectl --context pause-135059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-135059 -n pause-135059
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-135059 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-135059 logs -n 25: (1.587080848s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-561731 sudo cat /etc/kubernetes/kubelet.conf                                                                                                      │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /var/lib/kubelet/config.yaml                                                                                                      │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl status docker --all --full --no-pager                                                                                       │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl cat docker --no-pager                                                                                                       │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /etc/docker/daemon.json                                                                                                           │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo docker system info                                                                                                                    │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl status cri-docker --all --full --no-pager                                                                                   │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl cat cri-docker --no-pager                                                                                                   │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                              │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                        │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cri-dockerd --version                                                                                                                 │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl status containerd --all --full --no-pager                                                                                   │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl cat containerd --no-pager                                                                                                   │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /lib/systemd/system/containerd.service                                                                                            │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo cat /etc/containerd/config.toml                                                                                                       │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo containerd config dump                                                                                                                │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ ssh     │ -p cilium-561731 sudo crio config                                                                                                                           │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ delete  │ -p cilium-561731                                                                                                                                            │ cilium-561731          │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │ 27 Oct 25 22:48 UTC │
	│ start   │ -p guest-734990 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-734990           │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-977671 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-977671 │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	│ delete  │ -p running-upgrade-977671                                                                                                                                   │ running-upgrade-977671 │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │ 27 Oct 25 22:48 UTC │
	│ start   │ -p cert-expiration-858253 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                        │ cert-expiration-858253 │ jenkins │ v1.37.0 │ 27 Oct 25 22:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:48:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:48:30.553396  384901 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:48:30.553656  384901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:48:30.553660  384901 out.go:374] Setting ErrFile to fd 2...
	I1027 22:48:30.553663  384901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:48:30.553862  384901 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:48:30.554411  384901 out.go:368] Setting JSON to false
	I1027 22:48:30.555399  384901 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9058,"bootTime":1761596253,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:48:30.555484  384901 start.go:143] virtualization: kvm guest
	I1027 22:48:30.557986  384901 out.go:179] * [cert-expiration-858253] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:48:30.559749  384901 notify.go:221] Checking for updates...
	I1027 22:48:30.559765  384901 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:48:30.561241  384901 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:48:30.563282  384901 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 22:48:30.565200  384901 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 22:48:30.566611  384901 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:48:30.568227  384901 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:48:30.570155  384901 config.go:182] Loaded profile config "NoKubernetes-830800": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1027 22:48:30.570248  384901 config.go:182] Loaded profile config "guest-734990": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1027 22:48:30.570340  384901 config.go:182] Loaded profile config "pause-135059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:48:30.570441  384901 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:48:30.610020  384901 out.go:179] * Using the kvm2 driver based on user configuration
	I1027 22:48:30.611496  384901 start.go:307] selected driver: kvm2
	I1027 22:48:30.611505  384901 start.go:928] validating driver "kvm2" against <nil>
	I1027 22:48:30.611527  384901 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:48:30.612341  384901 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:48:30.612550  384901 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 22:48:30.612572  384901 cni.go:84] Creating CNI manager for ""
	I1027 22:48:30.612615  384901 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 22:48:30.612619  384901 start_flags.go:335] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 22:48:30.612655  384901 start.go:351] cluster config:
	{Name:cert-expiration-858253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-858253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:48:30.612756  384901 iso.go:125] acquiring lock: {Name:mk5fa90c3652bd8ab3eadfb83a864dcc2e121c25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:48:30.614556  384901 out.go:179] * Starting "cert-expiration-858253" primary control-plane node in "cert-expiration-858253" cluster
	I1027 22:48:27.973597  382737 main.go:143] libmachine: domain NoKubernetes-830800 has defined MAC address 52:54:00:3d:f3:ad in network mk-NoKubernetes-830800
	I1027 22:48:27.974500  382737 main.go:143] libmachine: no network interface addresses found for domain NoKubernetes-830800 (source=lease)
	I1027 22:48:27.974522  382737 main.go:143] libmachine: trying to list again with source=arp
	I1027 22:48:27.974999  382737 main.go:143] libmachine: unable to find current IP address of domain NoKubernetes-830800 in network mk-NoKubernetes-830800 (interfaces detected: [])
	I1027 22:48:27.975050  382737 retry.go:31] will retry after 3.532808478s: waiting for domain to come up
	I1027 22:48:31.509207  382737 main.go:143] libmachine: domain NoKubernetes-830800 has defined MAC address 52:54:00:3d:f3:ad in network mk-NoKubernetes-830800
	I1027 22:48:31.509832  382737 main.go:143] libmachine: no network interface addresses found for domain NoKubernetes-830800 (source=lease)
	I1027 22:48:31.509870  382737 main.go:143] libmachine: trying to list again with source=arp
	I1027 22:48:31.510225  382737 main.go:143] libmachine: unable to find current IP address of domain NoKubernetes-830800 in network mk-NoKubernetes-830800 (interfaces detected: [])
	I1027 22:48:31.510260  382737 retry.go:31] will retry after 3.314493339s: waiting for domain to come up
	W1027 22:48:31.276251  382554 pod_ready.go:104] pod "kube-controller-manager-pause-135059" is not "Ready", error: <nil>
	I1027 22:48:32.776398  382554 pod_ready.go:94] pod "kube-controller-manager-pause-135059" is "Ready"
	I1027 22:48:32.776434  382554 pod_ready.go:86] duration metric: took 8.008507162s for pod "kube-controller-manager-pause-135059" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:48:32.779373  382554 pod_ready.go:83] waiting for pod "kube-proxy-nsz84" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:48:32.784220  382554 pod_ready.go:94] pod "kube-proxy-nsz84" is "Ready"
	I1027 22:48:32.784248  382554 pod_ready.go:86] duration metric: took 4.843144ms for pod "kube-proxy-nsz84" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:48:32.787385  382554 pod_ready.go:83] waiting for pod "kube-scheduler-pause-135059" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:48:32.794353  382554 pod_ready.go:94] pod "kube-scheduler-pause-135059" is "Ready"
	I1027 22:48:32.794394  382554 pod_ready.go:86] duration metric: took 6.969798ms for pod "kube-scheduler-pause-135059" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:48:32.794410  382554 pod_ready.go:40] duration metric: took 15.085025546s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:48:32.843547  382554 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:48:32.846209  382554 out.go:179] * Done! kubectl is now configured to use "pause-135059" cluster and "default" namespace by default
	I1027 22:48:28.231487  384808 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1027 22:48:28.257917  384808 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1027 22:48:28.347681  384808 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1027 22:48:28.347882  384808 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/guest-734990/config.json ...
	I1027 22:48:28.347954  384808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/guest-734990/config.json: {Name:mk1a31915af0a770616e16129d798f1fd3af2a31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:48:28.348139  384808 start.go:360] acquireMachinesLock for guest-734990: {Name:mka983f7fa498b8241736aecc4fdc7843c00414d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	
	
	==> CRI-O <==
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.641843429Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605315641810705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44133559-e3c4-46da-b3ea-60cdc785810c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.642961719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c0aa78c-e2c1-42a1-96bc-6b3e44f70e05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.643057944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c0aa78c-e2c1-42a1-96bc-6b3e44f70e05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.643377932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:565a0834bf3b2b8706ccd28c24a5d9865f8f20ac9ffe6142402549891f268f71,PodSandboxId:0c94c58c05fd84dcf58137de285985b7f3a27b2f342d8919577fd51334da53a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761605295470011456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e03a2353b01a80aae1d92255ee746cdb6ed8e92a926b49b543ca7d80a2609f,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40be64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761605294454915295,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacbfc719980a146a0b66c5cef17bf3537c3c76afdd05d0d4dddde70d3e0a365,PodSandboxId:74900fbc30ed6c24396b3d0883838629c829da6446ce7a2229aacf6f0612ce3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761605289778937674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-ae45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd3fea7517008c61e2d4d1495c6b7c3d9344ee8329e99f8152b34fcc6e0b60,PodSandboxId:134e0c53b205b4507272d7b42c37b9f844c0f04aa00262e0242ea8a628014782,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761605289710678404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2097e92620ed00b58419da7e652fe8c20248f232e8950060b1ba18da02576f,PodSandboxId:fd86d45cf7571ac49dd3131fdfcb9657996e13c0a7d612f5a96049aa1d805c44,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761605289706218017,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf5e9497e2a89a3955d46adc0dfa81b0fabd5c3fd3baad08b583cf58146cb9,PodSandboxId:43b1e56450e4e77f688e68abfe3adbecdcd788f4c151052c3d5f7b160ea93856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761605289470580819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40b
e64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761605289410914011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f,PodSandboxId:36075fe63e6e4fb596316b932dab86a0154ed8826580a863a27020a474972390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761605235725179561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0,PodSandboxId:7bf46d7b55a3909ee2f589ac2751c10124bf1556b6da388dc91adc57516b0f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761605235279207730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-a
e45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc,PodSandboxId:e57b24170b419ad81e39b92b749854124d38aeed3fc8c85b651ac684a54a314d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761605220847348477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},
Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02,PodSandboxId:ad2e54d5a5f6c1363501176d08e0cf25275b47888e3807802867efb97abef28a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761605220712375420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d,PodSandboxId:4884affbd101098b9bed07ab31747f7359f3637f12c8822d3a5001d5abb96bff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761605220721762957,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c0aa78c-e2c1-42a1-96bc-6b3e44f70e05 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.697377850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99477e41-6300-4d33-96f9-aeaa6144d66d name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.697594117Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99477e41-6300-4d33-96f9-aeaa6144d66d name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.699358739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=def0f92e-d0ed-4bce-a178-a1d4dccf239e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.700048109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605315700024308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=def0f92e-d0ed-4bce-a178-a1d4dccf239e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.700713626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6a97e06-50b8-468b-a249-ea6335bc0eb4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.700794382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6a97e06-50b8-468b-a249-ea6335bc0eb4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.701330264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:565a0834bf3b2b8706ccd28c24a5d9865f8f20ac9ffe6142402549891f268f71,PodSandboxId:0c94c58c05fd84dcf58137de285985b7f3a27b2f342d8919577fd51334da53a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761605295470011456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e03a2353b01a80aae1d92255ee746cdb6ed8e92a926b49b543ca7d80a2609f,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40be64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761605294454915295,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacbfc719980a146a0b66c5cef17bf3537c3c76afdd05d0d4dddde70d3e0a365,PodSandboxId:74900fbc30ed6c24396b3d0883838629c829da6446ce7a2229aacf6f0612ce3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761605289778937674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-ae45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd3fea7517008c61e2d4d1495c6b7c3d9344ee8329e99f8152b34fcc6e0b60,PodSandboxId:134e0c53b205b4507272d7b42c37b9f844c0f04aa00262e0242ea8a628014782,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761605289710678404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2097e92620ed00b58419da7e652fe8c20248f232e8950060b1ba18da02576f,PodSandboxId:fd86d45cf7571ac49dd3131fdfcb9657996e13c0a7d612f5a96049aa1d805c44,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761605289706218017,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf5e9497e2a89a3955d46adc0dfa81b0fabd5c3fd3baad08b583cf58146cb9,PodSandboxId:43b1e56450e4e77f688e68abfe3adbecdcd788f4c151052c3d5f7b160ea93856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761605289470580819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40b
e64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761605289410914011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f,PodSandboxId:36075fe63e6e4fb596316b932dab86a0154ed8826580a863a27020a474972390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761605235725179561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0,PodSandboxId:7bf46d7b55a3909ee2f589ac2751c10124bf1556b6da388dc91adc57516b0f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761605235279207730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-a
e45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc,PodSandboxId:e57b24170b419ad81e39b92b749854124d38aeed3fc8c85b651ac684a54a314d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761605220847348477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},
Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02,PodSandboxId:ad2e54d5a5f6c1363501176d08e0cf25275b47888e3807802867efb97abef28a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761605220712375420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d,PodSandboxId:4884affbd101098b9bed07ab31747f7359f3637f12c8822d3a5001d5abb96bff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761605220721762957,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6a97e06-50b8-468b-a249-ea6335bc0eb4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.752180305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=927ef6e4-b04d-47d5-a844-6ced6e36ac87 name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.752275756Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=927ef6e4-b04d-47d5-a844-6ced6e36ac87 name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.755275152Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd169918-53c4-4b62-b6d0-398d4234060c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.755858992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605315755826692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd169918-53c4-4b62-b6d0-398d4234060c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.756772497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43cf4af1-b832-40c4-b9e7-733cdd9f88a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.756878497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43cf4af1-b832-40c4-b9e7-733cdd9f88a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.757625175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:565a0834bf3b2b8706ccd28c24a5d9865f8f20ac9ffe6142402549891f268f71,PodSandboxId:0c94c58c05fd84dcf58137de285985b7f3a27b2f342d8919577fd51334da53a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761605295470011456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e03a2353b01a80aae1d92255ee746cdb6ed8e92a926b49b543ca7d80a2609f,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40be64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761605294454915295,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacbfc719980a146a0b66c5cef17bf3537c3c76afdd05d0d4dddde70d3e0a365,PodSandboxId:74900fbc30ed6c24396b3d0883838629c829da6446ce7a2229aacf6f0612ce3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761605289778937674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-ae45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd3fea7517008c61e2d4d1495c6b7c3d9344ee8329e99f8152b34fcc6e0b60,PodSandboxId:134e0c53b205b4507272d7b42c37b9f844c0f04aa00262e0242ea8a628014782,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761605289710678404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2097e92620ed00b58419da7e652fe8c20248f232e8950060b1ba18da02576f,PodSandboxId:fd86d45cf7571ac49dd3131fdfcb9657996e13c0a7d612f5a96049aa1d805c44,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761605289706218017,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf5e9497e2a89a3955d46adc0dfa81b0fabd5c3fd3baad08b583cf58146cb9,PodSandboxId:43b1e56450e4e77f688e68abfe3adbecdcd788f4c151052c3d5f7b160ea93856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761605289470580819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40b
e64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761605289410914011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f,PodSandboxId:36075fe63e6e4fb596316b932dab86a0154ed8826580a863a27020a474972390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761605235725179561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0,PodSandboxId:7bf46d7b55a3909ee2f589ac2751c10124bf1556b6da388dc91adc57516b0f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761605235279207730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-a
e45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc,PodSandboxId:e57b24170b419ad81e39b92b749854124d38aeed3fc8c85b651ac684a54a314d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761605220847348477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},
Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02,PodSandboxId:ad2e54d5a5f6c1363501176d08e0cf25275b47888e3807802867efb97abef28a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761605220712375420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d,PodSandboxId:4884affbd101098b9bed07ab31747f7359f3637f12c8822d3a5001d5abb96bff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761605220721762957,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43cf4af1-b832-40c4-b9e7-733cdd9f88a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.818027340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e99e154-2dc4-4c7d-850b-88b8a597768b name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.818126666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e99e154-2dc4-4c7d-850b-88b8a597768b name=/runtime.v1.RuntimeService/Version
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.820992483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3559f522-47c9-492a-b431-2ae90e872d3f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.821397229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761605315821372642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3559f522-47c9-492a-b431-2ae90e872d3f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.822329892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17ae544e-82b5-4774-b345-57e32a93351e name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.822456502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17ae544e-82b5-4774-b345-57e32a93351e name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 22:48:35 pause-135059 crio[2543]: time="2025-10-27 22:48:35.822872381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:565a0834bf3b2b8706ccd28c24a5d9865f8f20ac9ffe6142402549891f268f71,PodSandboxId:0c94c58c05fd84dcf58137de285985b7f3a27b2f342d8919577fd51334da53a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761605295470011456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e03a2353b01a80aae1d92255ee746cdb6ed8e92a926b49b543ca7d80a2609f,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40be64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761605294454915295,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacbfc719980a146a0b66c5cef17bf3537c3c76afdd05d0d4dddde70d3e0a365,PodSandboxId:74900fbc30ed6c24396b3d0883838629c829da6446ce7a2229aacf6f0612ce3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761605289778937674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-ae45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd3fea7517008c61e2d4d1495c6b7c3d9344ee8329e99f8152b34fcc6e0b60,PodSandboxId:134e0c53b205b4507272d7b42c37b9f844c0f04aa00262e0242ea8a628014782,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761605289710678404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2097e92620ed00b58419da7e652fe8c20248f232e8950060b1ba18da02576f,PodSandboxId:fd86d45cf7571ac49dd3131fdfcb9657996e13c0a7d612f5a96049aa1d805c44,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761605289706218017,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf5e9497e2a89a3955d46adc0dfa81b0fabd5c3fd3baad08b583cf58146cb9,PodSandboxId:43b1e56450e4e77f688e68abfe3adbecdcd788f4c151052c3d5f7b160ea93856,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761605289470580819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2,PodSandboxId:d2bca645508523ec592b45e0e129598178498e3aa3d40b
e64db8a66b49ce046f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761605289410914011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa5f5e4447359c0241875e24b6211789,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f,PodSandboxId:36075fe63e6e4fb596316b932dab86a0154ed8826580a863a27020a474972390,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761605235725179561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-njs4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47176693-bbdf-4eeb-851c-0ff57481185a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0,PodSandboxId:7bf46d7b55a3909ee2f589ac2751c10124bf1556b6da388dc91adc57516b0f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761605235279207730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nsz84,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb5cdc0-f13c-4cc0-a
e45-993720272d35,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc,PodSandboxId:e57b24170b419ad81e39b92b749854124d38aeed3fc8c85b651ac684a54a314d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761605220847348477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341e452749629f418700e59efe97d0c2,},
Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02,PodSandboxId:ad2e54d5a5f6c1363501176d08e0cf25275b47888e3807802867efb97abef28a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761605220712375420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-135059,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 8123c8224b0a7792c84504967179aca8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d,PodSandboxId:4884affbd101098b9bed07ab31747f7359f3637f12c8822d3a5001d5abb96bff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761605220721762957,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-135059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27949c2c0320266ed5868f27a4b045aa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17ae544e-82b5-4774-b345-57e32a93351e name=/runtime.v1.RuntimeService/ListContainers
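	The crio debug entries above are CRI `RuntimeService/ListContainers` calls answered by CRI-O 1.29.1 on the paused node. As a rough illustration of the RPC the harness is exercising (not part of the captured logs), the sketch below issues the same call directly over the CRI API; the socket path `/var/run/crio/crio.sock` and the output formatting are assumptions for illustration only.

	```go
	// Minimal sketch: list containers over the CRI API, mirroring the
	// RuntimeService/ListContainers responses logged by crio above.
	// Assumes the default CRI-O socket path; adjust for other runtimes.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full container list, matching the
		// "No filters were applied" debug message in the crio log.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s attempt=%d  %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}
	```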
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	565a0834bf3b2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   20 seconds ago       Running             coredns                   1                   0c94c58c05fd8       coredns-66bc5c9577-njs4r
	78e03a2353b01       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   21 seconds ago       Running             kube-controller-manager   2                   d2bca64550852       kube-controller-manager-pause-135059
	eacbfc719980a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   26 seconds ago       Running             kube-proxy                1                   74900fbc30ed6       kube-proxy-nsz84
	b8bd3fea75170       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   26 seconds ago       Running             kube-apiserver            1                   134e0c53b205b       kube-apiserver-pause-135059
	af2097e92620e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   26 seconds ago       Running             etcd                      1                   fd86d45cf7571       etcd-pause-135059
	8aaf5e9497e2a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   26 seconds ago       Running             kube-scheduler            1                   43b1e56450e4e       kube-scheduler-pause-135059
	d228777a0d0e2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   26 seconds ago       Exited              kube-controller-manager   1                   d2bca64550852       kube-controller-manager-pause-135059
	6d1f29a2e124f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   36075fe63e6e4       coredns-66bc5c9577-njs4r
	314708a637d72       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   7bf46d7b55a39       kube-proxy-nsz84
	734735ba00bcd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Exited              kube-scheduler            0                   e57b24170b419       kube-scheduler-pause-135059
	a584500f1479b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Exited              kube-apiserver            0                   4884affbd1010       kube-apiserver-pause-135059
	b414803bfd37f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   ad2e54d5a5f6c       etcd-pause-135059
	
	
	==> coredns [565a0834bf3b2b8706ccd28c24a5d9865f8f20ac9ffe6142402549891f268f71] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47502 - 6662 "HINFO IN 6331120275465665382.7655734933809026510. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.123586341s
	
	
	==> coredns [6d1f29a2e124fd6e2fa940839f68a108d0ae0f490f9bf469640efeacd680d45f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55645 - 42717 "HINFO IN 2493165542446951685.3773547996396777182. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057352806s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-135059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-135059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=pause-135059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_47_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:47:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-135059
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:48:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:48:14 +0000   Mon, 27 Oct 2025 22:47:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:48:14 +0000   Mon, 27 Oct 2025 22:47:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:48:14 +0000   Mon, 27 Oct 2025 22:47:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:48:14 +0000   Mon, 27 Oct 2025 22:47:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.114
	  Hostname:    pause-135059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 f29c39f8ad89470cad4177f304789d6f
	  System UUID:                f29c39f8-ad89-470c-ad41-77f304789d6f
	  Boot ID:                    4d4a86a4-42d2-4f35-9e35-44686cb2d8ca
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-njs4r                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     82s
	  kube-system                 etcd-pause-135059                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         89s
	  kube-system                 kube-apiserver-pause-135059             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-135059    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-nsz84                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-135059             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     98s (x7 over 98s)  kubelet          Node pause-135059 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node pause-135059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node pause-135059 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node pause-135059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node pause-135059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                kubelet          Node pause-135059 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s                kubelet          Node pause-135059 status is now: NodeReady
	  Normal  RegisteredNode           84s                node-controller  Node pause-135059 event: Registered Node pause-135059 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-135059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-135059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-135059 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-135059 event: Registered Node pause-135059 in Controller
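	The percentages in the Allocated resources table above are the summed pod requests divided by the node's allocatable capacity (2 CPUs, 3042712Ki memory). A quick arithmetic check of those figures, assuming plain truncation to whole percent for illustration (not necessarily kubectl's exact rounding):

	```go
	// Quick check of the Allocated resources percentages shown above.
	// Truncation to whole percent is an assumption made for illustration.
	package main

	import "fmt"

	func main() {
		// CPU: summed requests are in millicores; allocatable is 2 CPUs = 2000m.
		cpuRequests, cpuAllocatable := 750.0, 2000.0
		fmt.Printf("cpu:    %d%%\n", int(100*cpuRequests/cpuAllocatable)) // prints 37%

		// Memory: summed requests 170Mi; allocatable 3042712Ki (both converted to Ki).
		memRequests, memAllocatable := 170.0*1024, 3042712.0
		fmt.Printf("memory: %d%%\n", int(100*memRequests/memAllocatable)) // prints 5%
	}
	```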
	
	
	==> dmesg <==
	[Oct27 22:46] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000134] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.013929] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.206701] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089252] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.124394] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.107838] kauditd_printk_skb: 18 callbacks suppressed
	[Oct27 22:47] kauditd_printk_skb: 171 callbacks suppressed
	[  +2.634402] kauditd_printk_skb: 19 callbacks suppressed
	[ +32.414344] kauditd_printk_skb: 183 callbacks suppressed
	[Oct27 22:48] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.946510] kauditd_printk_skb: 240 callbacks suppressed
	
	
	==> etcd [af2097e92620ed00b58419da7e652fe8c20248f232e8950060b1ba18da02576f] <==
	{"level":"warn","ts":"2025-10-27T22:48:12.229295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.254192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.270158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.327274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.334170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.382216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.382391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.407537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.436086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.465622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.488420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.504410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.556732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.572546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.590314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.606107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.629695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.642561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.673854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.699396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.722864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.758483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.791436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:12.829247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:48:13.118103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34964","server-name":"","error":"EOF"}
	
	
	==> etcd [b414803bfd37fad76a69957e8ae6f11d7f7a7fdaa1bac87d083415ad2dd48d02] <==
	{"level":"info","ts":"2025-10-27T22:47:14.657779Z","caller":"traceutil/trace.go:172","msg":"trace[23494571] range","detail":"{range_begin:/registry/minions/pause-135059; range_end:; response_count:1; response_revision:362; }","duration":"723.309084ms","start":"2025-10-27T22:47:13.934459Z","end":"2025-10-27T22:47:14.657768Z","steps":["trace[23494571] 'agreement among raft nodes before linearized reading'  (duration: 712.632286ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:47:14.660116Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T22:47:13.934445Z","time spent":"725.654942ms","remote":"127.0.0.1:52172","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":5303,"request content":"key:\"/registry/minions/pause-135059\" limit:1 "}
	{"level":"info","ts":"2025-10-27T22:47:14.715575Z","caller":"traceutil/trace.go:172","msg":"trace[575030198] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"528.480048ms","start":"2025-10-27T22:47:14.187078Z","end":"2025-10-27T22:47:14.715558Z","steps":["trace[575030198] 'process raft request'  (duration: 528.278979ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:47:14.715760Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T22:47:14.187059Z","time spent":"528.644164ms","remote":"127.0.0.1:52340","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-i5olmgd22aacl34fqifrgudzvu\" mod_revision:14 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-i5olmgd22aacl34fqifrgudzvu\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-i5olmgd22aacl34fqifrgudzvu\" > >"}
	{"level":"info","ts":"2025-10-27T22:47:35.486327Z","caller":"traceutil/trace.go:172","msg":"trace[717307423] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"203.964498ms","start":"2025-10-27T22:47:35.282349Z","end":"2025-10-27T22:47:35.486314Z","steps":["trace[717307423] 'process raft request'  (duration: 203.554534ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:47:35.743212Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.586911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T22:47:35.743256Z","caller":"traceutil/trace.go:172","msg":"trace[1916930527] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:425; }","duration":"121.639776ms","start":"2025-10-27T22:47:35.621608Z","end":"2025-10-27T22:47:35.743248Z","steps":["trace[1916930527] 'range keys from in-memory index tree'  (duration: 121.456469ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:47:57.140470Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T22:47:57.140543Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-135059","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.114:2380"],"advertise-client-urls":["https://192.168.50.114:2379"]}
	{"level":"error","ts":"2025-10-27T22:47:57.140634Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T22:47:58.247210Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T22:47:58.248941Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:47:58.248982Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f0e2ae880f3a35e5","current-leader-member-id":"f0e2ae880f3a35e5"}
	{"level":"info","ts":"2025-10-27T22:47:58.249048Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-27T22:47:58.249061Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T22:47:58.249130Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T22:47:58.249203Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T22:47:58.249214Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T22:47:58.249265Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.114:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T22:47:58.249275Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.114:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T22:47:58.249283Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.114:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:47:58.252959Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.114:2380"}
	{"level":"error","ts":"2025-10-27T22:47:58.253057Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.114:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:47:58.253124Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.114:2380"}
	{"level":"info","ts":"2025-10-27T22:47:58.253138Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-135059","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.114:2380"],"advertise-client-urls":["https://192.168.50.114:2379"]}
	
	
	==> kernel <==
	 22:48:36 up 2 min,  0 users,  load average: 0.78, 0.40, 0.15
	Linux pause-135059 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Oct 25 21:00:46 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a584500f1479bdb3155a59e60bd0ff31bcf02614b95124c21bf48fc02a18667d] <==
	W1027 22:47:57.160285       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160347       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160449       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160519       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160575       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160628       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160779       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160852       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160908       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.160974       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161017       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161059       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161102       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161182       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161236       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161279       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161321       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161361       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161576       1 logging.go:55] [core] [Channel #11 SubChannel #13]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161581       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.161619       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.162014       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.162208       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.162406       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 22:47:57.163813       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b8bd3fea7517008c61e2d4d1495c6b7c3d9344ee8329e99f8152b34fcc6e0b60] <==
	I1027 22:48:14.227203       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:48:14.242037       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 22:48:14.242090       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 22:48:14.242096       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 22:48:14.246219       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 22:48:14.252588       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 22:48:14.253935       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 22:48:14.261451       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:48:14.262063       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 22:48:14.262143       1 policy_source.go:240] refreshing policies
	I1027 22:48:14.265189       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 22:48:14.265332       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1027 22:48:14.278432       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 22:48:14.295499       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:48:14.307877       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:48:14.324097       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 22:48:14.328371       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:48:15.072595       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:48:15.205491       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:48:16.993230       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:48:17.080845       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:48:17.159457       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:48:17.189339       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:48:19.222820       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:48:19.275227       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [78e03a2353b01a80aae1d92255ee746cdb6ed8e92a926b49b543ca7d80a2609f] <==
	I1027 22:48:19.055938       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:48:19.059006       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:48:19.062294       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:48:19.065842       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 22:48:19.066850       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:48:19.067405       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 22:48:19.068128       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 22:48:19.069750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 22:48:19.069860       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:48:19.069872       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 22:48:19.069879       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 22:48:19.070258       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:48:19.071108       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 22:48:19.074312       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 22:48:19.082623       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:48:19.084423       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:48:19.098920       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:48:19.103078       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 22:48:19.116737       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:48:19.116837       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:48:19.117396       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:48:19.117456       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-135059"
	I1027 22:48:19.117494       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 22:48:19.119016       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:48:19.119193       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-controller-manager [d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2] <==
	
	
	==> kube-proxy [314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0] <==
	I1027 22:47:15.714437       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:47:15.826058       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:47:15.826131       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.114"]
	E1027 22:47:15.826267       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:47:15.978280       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 22:47:15.978389       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 22:47:15.978435       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:47:15.992759       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:47:15.993191       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:47:15.993237       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:47:15.997969       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:47:15.998022       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:47:16.005583       1 config.go:200] "Starting service config controller"
	I1027 22:47:16.007269       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:47:16.006156       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:47:16.007900       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:47:16.006407       1 config.go:309] "Starting node config controller"
	I1027 22:47:16.007919       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:47:16.007924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:47:16.099002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:47:16.107922       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:47:16.108163       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [eacbfc719980a146a0b66c5cef17bf3537c3c76afdd05d0d4dddde70d3e0a365] <==
	I1027 22:48:15.698610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:48:15.802107       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:48:15.802354       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.114"]
	E1027 22:48:15.802467       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:48:15.908310       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 22:48:15.908480       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 22:48:15.908546       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:48:15.936308       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:48:15.936968       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:48:15.937116       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:48:15.947918       1 config.go:200] "Starting service config controller"
	I1027 22:48:15.948092       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:48:15.948151       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:48:15.948174       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:48:15.948204       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:48:15.948226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:48:15.951048       1 config.go:309] "Starting node config controller"
	I1027 22:48:15.951128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:48:15.951157       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:48:16.048286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:48:16.048343       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:48:16.048371       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [734735ba00bcdc4a45f25488b0e3632a7fec34bd379b949d6a252e4d062cd2fc] <==
	E1027 22:47:04.813407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 22:47:04.816091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 22:47:04.820592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:47:04.822623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1027 22:47:04.811747       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1027 22:47:04.828280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:47:04.829381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:47:04.829746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:47:04.829835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:47:04.829871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:47:04.829896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:47:04.829932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:47:04.829943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:47:04.830015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:47:04.830050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:47:04.830075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:47:04.830195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:47:04.830231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:47:04.831108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:47:04.831580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1027 22:47:06.227450       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:47:57.141451       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 22:47:57.149525       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 22:47:57.149538       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 22:47:57.149552       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8aaf5e9497e2a89a3955d46adc0dfa81b0fabd5c3fd3baad08b583cf58146cb9] <==
	I1027 22:48:11.878233       1 serving.go:386] Generated self-signed cert in-memory
	W1027 22:48:14.099088       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:48:14.099154       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:48:14.099167       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:48:14.099182       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:48:14.213531       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:48:14.213579       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:48:14.223529       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:48:14.224039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:48:14.224075       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:48:14.224144       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:48:14.324407       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.308999    3399 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.312282    3399 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: E1027 22:48:14.341912    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-135059\" already exists" pod="kube-system/etcd-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.341946    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: E1027 22:48:14.368067    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-135059\" already exists" pod="kube-system/kube-apiserver-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.368758    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: E1027 22:48:14.384091    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-135059\" already exists" pod="kube-system/kube-controller-manager-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.384333    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: E1027 22:48:14.397591    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-135059\" already exists" pod="kube-system/kube-scheduler-pause-135059"
	Oct 27 22:48:14 pause-135059 kubelet[3399]: I1027 22:48:14.416449    3399 scope.go:117] "RemoveContainer" containerID="d228777a0d0e285bd9bcfaeb17c219503ce0ee4f676bf3a60c6f2962163dfae2"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.044720    3399 apiserver.go:52] "Watching apiserver"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.125333    3399 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.196759    3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5eb5cdc0-f13c-4cc0-ae45-993720272d35-lib-modules\") pod \"kube-proxy-nsz84\" (UID: \"5eb5cdc0-f13c-4cc0-ae45-993720272d35\") " pod="kube-system/kube-proxy-nsz84"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.196870    3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5eb5cdc0-f13c-4cc0-ae45-993720272d35-xtables-lock\") pod \"kube-proxy-nsz84\" (UID: \"5eb5cdc0-f13c-4cc0-ae45-993720272d35\") " pod="kube-system/kube-proxy-nsz84"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.352140    3399 scope.go:117] "RemoveContainer" containerID="314708a637d729c7a8782ee59db95a7e225dba94d14d877dae61f1c9682fc5a0"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.428371    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-135059"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.433316    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-135059"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: I1027 22:48:15.440159    3399 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-135059"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: E1027 22:48:15.627894    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-135059\" already exists" pod="kube-system/kube-apiserver-pause-135059"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: E1027 22:48:15.627894    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-135059\" already exists" pod="kube-system/etcd-pause-135059"
	Oct 27 22:48:15 pause-135059 kubelet[3399]: E1027 22:48:15.671618    3399 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-135059\" already exists" pod="kube-system/kube-scheduler-pause-135059"
	Oct 27 22:48:23 pause-135059 kubelet[3399]: E1027 22:48:23.395599    3399 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761605303395164871  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 22:48:23 pause-135059 kubelet[3399]: E1027 22:48:23.395717    3399 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761605303395164871  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 22:48:33 pause-135059 kubelet[3399]: E1027 22:48:33.403473    3399 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761605313397159815  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 22:48:33 pause-135059 kubelet[3399]: E1027 22:48:33.403550    3399 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761605313397159815  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-135059 -n pause-135059
helpers_test.go:269: (dbg) Run:  kubectl --context pause-135059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (47.72s)

                                                
                                    

Test pass (297/342)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.58
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 3.22
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.18
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.17
21 TestBinaryMirror 0.68
22 TestOffline 62.12
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 157.82
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 9.58
35 TestAddons/parallel/Registry 15.9
36 TestAddons/parallel/RegistryCreds 0.49
38 TestAddons/parallel/InspektorGadget 5.3
39 TestAddons/parallel/MetricsServer 6.84
41 TestAddons/parallel/CSI 53.9
42 TestAddons/parallel/Headlamp 23.17
43 TestAddons/parallel/CloudSpanner 5.6
44 TestAddons/parallel/LocalPath 58.53
45 TestAddons/parallel/NvidiaDevicePlugin 6.77
46 TestAddons/parallel/Yakd 11.98
48 TestAddons/StoppedEnableDisable 90.6
49 TestCertOptions 46.82
50 TestCertExpiration 275.89
52 TestForceSystemdFlag 85.86
53 TestForceSystemdEnv 69.12
58 TestErrorSpam/setup 39.51
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.63
62 TestErrorSpam/unpause 1.85
63 TestErrorSpam/stop 85.08
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 81.69
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 47.06
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.6
75 TestFunctional/serial/CacheCmd/cache/add_local 1.56
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 27.38
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.56
86 TestFunctional/serial/LogsFileCmd 1.55
87 TestFunctional/serial/InvalidService 4.11
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 28.73
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 1.11
97 TestFunctional/parallel/ServiceCmdConnect 8.59
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 45.44
101 TestFunctional/parallel/SSHCmd 0.36
102 TestFunctional/parallel/CpCmd 1.38
103 TestFunctional/parallel/MySQL 23.79
104 TestFunctional/parallel/FileSync 0.18
105 TestFunctional/parallel/CertSync 1.41
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
113 TestFunctional/parallel/License 0.45
114 TestFunctional/parallel/MountCmd/any-port 7.4
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
116 TestFunctional/parallel/ProfileCmd/profile_list 0.41
117 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
128 TestFunctional/parallel/MountCmd/specific-port 1.56
129 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
134 TestFunctional/parallel/ImageCommands/ImageBuild 6.33
135 TestFunctional/parallel/ImageCommands/Setup 1.05
136 TestFunctional/parallel/ServiceCmd/List 0.38
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.56
140 TestFunctional/parallel/ServiceCmd/Format 0.37
141 TestFunctional/parallel/ServiceCmd/URL 0.37
142 TestFunctional/parallel/Version/short 0.13
143 TestFunctional/parallel/Version/components 0.75
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.6
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.82
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.57
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.72
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 235.68
161 TestMultiControlPlane/serial/DeployApp 5.76
162 TestMultiControlPlane/serial/PingHostFromPods 1.45
163 TestMultiControlPlane/serial/AddWorkerNode 46.86
164 TestMultiControlPlane/serial/NodeLabels 0.08
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.74
166 TestMultiControlPlane/serial/CopyFile 11.51
167 TestMultiControlPlane/serial/StopSecondaryNode 83.62
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
169 TestMultiControlPlane/serial/RestartSecondaryNode 43.55
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.01
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 388.59
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.18
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
174 TestMultiControlPlane/serial/StopCluster 236.01
175 TestMultiControlPlane/serial/RestartCluster 109.13
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
177 TestMultiControlPlane/serial/AddSecondaryNode 78.35
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.71
183 TestJSONOutput/start/Command 88.96
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.78
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.67
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.05
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.25
211 TestMainNoArgs 0.07
212 TestMinikubeProfile 86.45
215 TestMountStart/serial/StartWithMountFirst 24.2
216 TestMountStart/serial/VerifyMountFirst 0.31
217 TestMountStart/serial/StartWithMountSecond 24.33
218 TestMountStart/serial/VerifyMountSecond 0.32
219 TestMountStart/serial/DeleteFirst 0.72
220 TestMountStart/serial/VerifyMountPostDelete 0.33
221 TestMountStart/serial/Stop 1.38
222 TestMountStart/serial/RestartStopped 21.44
223 TestMountStart/serial/VerifyMountPostStop 0.32
226 TestMultiNode/serial/FreshStart2Nodes 107.12
227 TestMultiNode/serial/DeployApp2Nodes 4.44
228 TestMultiNode/serial/PingHostFrom2Pods 0.91
229 TestMultiNode/serial/AddNode 43.88
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.5
232 TestMultiNode/serial/CopyFile 6.37
233 TestMultiNode/serial/StopNode 2.46
234 TestMultiNode/serial/StartAfterStop 46.14
235 TestMultiNode/serial/RestartKeepsNodes 299.79
236 TestMultiNode/serial/DeleteNode 2.96
237 TestMultiNode/serial/StopMultiNode 159.76
238 TestMultiNode/serial/RestartMultiNode 89.86
239 TestMultiNode/serial/ValidateNameConflict 41.64
246 TestScheduledStopUnix 113.26
250 TestRunningBinaryUpgrade 147.91
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
264 TestPause/serial/Start 106.87
265 TestNoKubernetes/serial/StartWithK8s 87.12
266 TestNoKubernetes/serial/StartWithStopK8s 32.02
268 TestNoKubernetes/serial/Start 36.82
276 TestNetworkPlugins/group/false 4.92
280 TestISOImage/Setup 33.31
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
282 TestNoKubernetes/serial/ProfileList 0.5
283 TestNoKubernetes/serial/Stop 1.39
284 TestNoKubernetes/serial/StartNoArgs 80.34
286 TestISOImage/Binaries/crictl 0.17
287 TestISOImage/Binaries/curl 0.18
288 TestISOImage/Binaries/docker 0.19
289 TestISOImage/Binaries/git 0.18
290 TestISOImage/Binaries/iptables 0.19
291 TestISOImage/Binaries/podman 0.19
292 TestISOImage/Binaries/rsync 0.17
293 TestISOImage/Binaries/socat 0.17
294 TestISOImage/Binaries/wget 0.17
295 TestISOImage/Binaries/VBoxControl 0.18
296 TestISOImage/Binaries/VBoxService 0.17
298 TestStartStop/group/old-k8s-version/serial/FirstStart 154.22
299 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
301 TestStartStop/group/no-preload/serial/FirstStart 147.56
302 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.4
304 TestStartStop/group/old-k8s-version/serial/Stop 81.94
305 TestStartStop/group/no-preload/serial/DeployApp 9.3
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
307 TestStartStop/group/no-preload/serial/Stop 89.87
309 TestStartStop/group/embed-certs/serial/FirstStart 60.37
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
311 TestStartStop/group/old-k8s-version/serial/SecondStart 63.85
312 TestStartStop/group/embed-certs/serial/DeployApp 8.32
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 18.01
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
315 TestStartStop/group/no-preload/serial/SecondStart 61.55
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.42
317 TestStartStop/group/embed-certs/serial/Stop 86.64
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/old-k8s-version/serial/Pause 2.99
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.03
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.89
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
326 TestStartStop/group/no-preload/serial/Pause 2.89
327 TestStoppedBinaryUpgrade/Setup 0.6
328 TestStoppedBinaryUpgrade/Upgrade 102.35
329 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
330 TestStartStop/group/embed-certs/serial/SecondStart 75.64
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.42
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.19
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.7
334 TestStartStop/group/default-k8s-diff-port/serial/Stop 85.02
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
337 TestStartStop/group/embed-certs/serial/Pause 3.01
339 TestStartStop/group/newest-cni/serial/FirstStart 49.81
340 TestStoppedBinaryUpgrade/MinikubeLogs 1.28
341 TestNetworkPlugins/group/auto/Start 103.26
342 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.13
344 TestStartStop/group/newest-cni/serial/Stop 10.47
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
346 TestStartStop/group/newest-cni/serial/SecondStart 37.8
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
348 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.2
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
352 TestStartStop/group/newest-cni/serial/Pause 2.98
353 TestNetworkPlugins/group/kindnet/Start 94.25
354 TestNetworkPlugins/group/auto/KubeletFlags 0.3
355 TestNetworkPlugins/group/auto/NetCatPod 10.39
356 TestNetworkPlugins/group/auto/DNS 0.21
357 TestNetworkPlugins/group/auto/Localhost 0.22
358 TestNetworkPlugins/group/auto/HairPin 0.17
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
360 TestNetworkPlugins/group/calico/Start 74.76
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.37
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.42
364 TestNetworkPlugins/group/custom-flannel/Start 81.07
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
367 TestNetworkPlugins/group/kindnet/NetCatPod 11.31
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/kindnet/DNS 0.17
370 TestNetworkPlugins/group/kindnet/Localhost 0.16
371 TestNetworkPlugins/group/calico/KubeletFlags 0.21
372 TestNetworkPlugins/group/kindnet/HairPin 0.14
373 TestNetworkPlugins/group/calico/NetCatPod 11.3
374 TestNetworkPlugins/group/calico/DNS 0.17
375 TestNetworkPlugins/group/calico/Localhost 0.14
376 TestNetworkPlugins/group/calico/HairPin 0.15
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.25
379 TestNetworkPlugins/group/enable-default-cni/Start 86.42
380 TestNetworkPlugins/group/custom-flannel/DNS 0.17
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
383 TestNetworkPlugins/group/flannel/Start 84.42
384 TestNetworkPlugins/group/bridge/Start 73.77
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
392 TestNetworkPlugins/group/bridge/NetCatPod 11.26
393 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
394 TestNetworkPlugins/group/flannel/NetCatPod 11.28
395 TestNetworkPlugins/group/bridge/DNS 0.17
396 TestNetworkPlugins/group/bridge/Localhost 0.15
397 TestNetworkPlugins/group/bridge/HairPin 0.15
398 TestNetworkPlugins/group/flannel/DNS 0.2
399 TestNetworkPlugins/group/flannel/Localhost 0.16
400 TestNetworkPlugins/group/flannel/HairPin 0.15
402 TestISOImage/PersistentMounts//data 0.17
403 TestISOImage/PersistentMounts//var/lib/docker 0.17
404 TestISOImage/PersistentMounts//var/lib/cni 0.18
405 TestISOImage/PersistentMounts//var/lib/kubelet 0.18
406 TestISOImage/PersistentMounts//var/lib/minikube 0.17
407 TestISOImage/PersistentMounts//var/lib/toolbox 0.17
408 TestISOImage/PersistentMounts//var/lib/boot2docker 0.17
x
+
TestDownloadOnly/v1.28.0/json-events (6.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-129489 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-129489 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.579759046s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.58s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1027 21:49:15.015331  356621 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1027 21:49:15.015436  356621 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-129489
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-129489: exit status 85 (88.092362ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-129489 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-129489 │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 21:49:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 21:49:08.492725  356633 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:49:08.493350  356633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:49:08.493361  356633 out.go:374] Setting ErrFile to fd 2...
	I1027 21:49:08.493366  356633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:49:08.493578  356633 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	W1027 21:49:08.493717  356633 root.go:316] Error reading config file at /home/jenkins/minikube-integration/21790-352679/.minikube/config/config.json: open /home/jenkins/minikube-integration/21790-352679/.minikube/config/config.json: no such file or directory
	I1027 21:49:08.494271  356633 out.go:368] Setting JSON to true
	I1027 21:49:08.495372  356633 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5496,"bootTime":1761596253,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 21:49:08.495499  356633 start.go:143] virtualization: kvm guest
	I1027 21:49:08.497976  356633 out.go:99] [download-only-129489] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1027 21:49:08.498188  356633 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball: no such file or directory
	I1027 21:49:08.498225  356633 notify.go:221] Checking for updates...
	I1027 21:49:08.499633  356633 out.go:171] MINIKUBE_LOCATION=21790
	I1027 21:49:08.501308  356633 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 21:49:08.503119  356633 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 21:49:08.507605  356633 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 21:49:08.508981  356633 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1027 21:49:08.511231  356633 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 21:49:08.511515  356633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 21:49:08.543884  356633 out.go:99] Using the kvm2 driver based on user configuration
	I1027 21:49:08.543955  356633 start.go:307] selected driver: kvm2
	I1027 21:49:08.543964  356633 start.go:928] validating driver "kvm2" against <nil>
	I1027 21:49:08.544300  356633 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 21:49:08.544777  356633 start_flags.go:409] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1027 21:49:08.544953  356633 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 21:49:08.544985  356633 cni.go:84] Creating CNI manager for ""
	I1027 21:49:08.545041  356633 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 21:49:08.545050  356633 start_flags.go:335] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 21:49:08.545095  356633 start.go:351] cluster config:
	{Name:download-only-129489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-129489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 21:49:08.545288  356633 iso.go:125] acquiring lock: {Name:mk5fa90c3652bd8ab3eadfb83a864dcc2e121c25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 21:49:08.546965  356633 out.go:99] Downloading VM boot image ...
	I1027 21:49:08.547004  356633 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21790-352679/.minikube/cache/iso/amd64/minikube-v1.37.0-1761414747-21797-amd64.iso
	I1027 21:49:11.745351  356633 out.go:99] Starting "download-only-129489" primary control-plane node in "download-only-129489" cluster
	I1027 21:49:11.745387  356633 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 21:49:11.773682  356633 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1027 21:49:11.773717  356633 cache.go:59] Caching tarball of preloaded images
	I1027 21:49:11.773933  356633 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 21:49:11.775924  356633 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1027 21:49:11.775953  356633 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1027 21:49:11.802741  356633 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1027 21:49:11.802871  356633 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-129489 host does not exist
	  To start a cluster, run: "minikube start -p download-only-129489"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
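
Both downloads in the log above append a checksum hint to the URL ("checksum=file:<url>.sha256" for the ISO, "checksum=md5:<hash>" for the preload tarball) so the cached artifact can be verified once it lands on disk. Below is a minimal Go sketch of that kind of post-download MD5 check, reusing the checksum value reported in the log; this is an illustration only, not minikube's actual download code.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 compares the MD5 digest of a downloaded file against the
// expected hex string (the value after "checksum=md5:" in the URL).
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Example values taken from the log above; paths differ on other hosts.
	err := verifyMD5("preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4",
		"72bc7f8573f574c02d8c9a9b3496176b")
	fmt.Println(err)
}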

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-129489
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-598387 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-598387 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.214972753s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1027 21:49:18.652991  356621 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1027 21:49:18.653043  356621 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-352679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-598387
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-598387: exit status 85 (80.258434ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-129489 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-129489 │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │ 27 Oct 25 21:49 UTC │
	│ delete  │ -p download-only-129489                                                                                                                                                 │ download-only-129489 │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │ 27 Oct 25 21:49 UTC │
	│ start   │ -o=json --download-only -p download-only-598387 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-598387 │ jenkins │ v1.37.0 │ 27 Oct 25 21:49 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 21:49:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 21:49:15.493241  356826 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:49:15.493536  356826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:49:15.493547  356826 out.go:374] Setting ErrFile to fd 2...
	I1027 21:49:15.493553  356826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:49:15.493787  356826 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 21:49:15.494335  356826 out.go:368] Setting JSON to true
	I1027 21:49:15.495421  356826 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5503,"bootTime":1761596253,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 21:49:15.495539  356826 start.go:143] virtualization: kvm guest
	I1027 21:49:15.498091  356826 out.go:99] [download-only-598387] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 21:49:15.498358  356826 notify.go:221] Checking for updates...
	I1027 21:49:15.500184  356826 out.go:171] MINIKUBE_LOCATION=21790
	I1027 21:49:15.501793  356826 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 21:49:15.503222  356826 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 21:49:15.504673  356826 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 21:49:15.506167  356826 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-598387 host does not exist
	  To start a cluster, run: "minikube start -p download-only-598387"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.18s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-598387
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestBinaryMirror (0.68s)

                                                
                                                
=== RUN   TestBinaryMirror
I1027 21:49:19.394839  356621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-176666 --alsologtostderr --binary-mirror http://127.0.0.1:36419 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-176666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-176666
--- PASS: TestBinaryMirror (0.68s)

                                                
                                    
TestOffline (62.12s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-796981 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-796981 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.073481527s)
helpers_test.go:175: Cleaning up "offline-crio-796981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-796981
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-796981: (1.04196449s)
--- PASS: TestOffline (62.12s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-865238
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-865238: exit status 85 (74.496551ms)

                                                
                                                
-- stdout --
	* Profile "addons-865238" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-865238"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-865238
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-865238: exit status 85 (74.439305ms)

                                                
                                                
-- stdout --
	* Profile "addons-865238" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-865238"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (157.82s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-865238 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-865238 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m37.818752581s)
--- PASS: TestAddons/Setup (157.82s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-865238 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-865238 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.58s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-865238 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-865238 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [88657516-1699-4de3-80c1-13dffabfc378] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [88657516-1699-4de3-80c1-13dffabfc378] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004649188s
addons_test.go:694: (dbg) Run:  kubectl --context addons-865238 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-865238 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-865238 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.58s)

                                                
                                    
TestAddons/parallel/Registry (15.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.378358ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-9j6vm" [9ee1777e-23f5-4221-b374-9a1234ea50f4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010786868s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-vv9hz" [e0b028ef-aa0b-4b73-ac69-1e31aeb5123a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009584808s
addons_test.go:392: (dbg) Run:  kubectl --context addons-865238 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-865238 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-865238 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.463313339s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-865238 addons disable registry --alsologtostderr -v=1: (1.226102575s)
--- PASS: TestAddons/parallel/Registry (15.90s)
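
The registry check above launches a one-shot busybox pod and probes the in-cluster Service with wget --spider. The sketch below drives the same probe from Go by shelling out to kubectl; it is a hypothetical helper, not the test's own runner, and it passes -i instead of the log's -it because no TTY is attached.

package main

import (
	"fmt"
	"os/exec"
)

// probeRegistry runs a one-shot busybox pod, as the addon test does, to
// confirm the registry Service answers inside the cluster.
func probeRegistry(kubeContext string) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := probeRegistry("addons-865238"); err != nil {
		fmt.Println("registry probe failed:", err)
	}
}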

                                                
                                    
TestAddons/parallel/RegistryCreds (0.49s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.385324ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-865238
addons_test.go:332: (dbg) Run:  kubectl --context addons-865238 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.49s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jhhrx" [419b3ce9-a1a3-4f6d-881d-b766bf86b6f9] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003679361s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.30s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.84s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.992869ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-dsd4x" [16454441-4f3c-4401-b6ca-c56647697e9e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004859799s
addons_test.go:463: (dbg) Run:  kubectl --context addons-865238 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.84s)

                                                
                                    
TestAddons/parallel/CSI (53.9s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1027 21:52:35.310063  356621 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1027 21:52:35.325297  356621 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1027 21:52:35.325340  356621 kapi.go:107] duration metric: took 15.300937ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 15.315674ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-865238 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-865238 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [bc72561b-119e-465a-863d-c47df9aee940] Pending
helpers_test.go:352: "task-pv-pod" [bc72561b-119e-465a-863d-c47df9aee940] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [bc72561b-119e-465a-863d-c47df9aee940] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.005414443s
addons_test.go:572: (dbg) Run:  kubectl --context addons-865238 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-865238 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-865238 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-865238 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-865238 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-865238 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-865238 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [e0bd026c-5d4e-4f2e-aeaa-fbf4d5ee7864] Pending
helpers_test.go:352: "task-pv-pod-restore" [e0bd026c-5d4e-4f2e-aeaa-fbf4d5ee7864] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [e0bd026c-5d4e-4f2e-aeaa-fbf4d5ee7864] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005135926s
addons_test.go:614: (dbg) Run:  kubectl --context addons-865238 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-865238 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-865238 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-865238 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.084566929s)
--- PASS: TestAddons/parallel/CSI (53.90s)
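
The long run of identical kubectl get pvc ... -o jsonpath={.status.phase} calls above is the test helper polling until the claim reports Bound. Here is a minimal sketch of such a polling loop, shelling out to kubectl the same way; the helper name and interval are assumptions, not the helpers_test.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the PVC's .status.phase until it reads "Bound" or
// the timeout expires, mirroring the repeated jsonpath queries in the log.
func waitForPVCBound(kubeContext, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	fmt.Println(waitForPVCBound("addons-865238", "default", "hpvc", 6*time.Minute))
}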

                                                
                                    
TestAddons/parallel/Headlamp (23.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-865238 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-865238 --alsologtostderr -v=1: (1.28881806s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-htldg" [a6162227-0fa5-4f68-99f5-10cc9fc3059e] Pending
helpers_test.go:352: "headlamp-6945c6f4d-htldg" [a6162227-0fa5-4f68-99f5-10cc9fc3059e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-htldg" [a6162227-0fa5-4f68-99f5-10cc9fc3059e] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.005332258s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-865238 addons disable headlamp --alsologtostderr -v=1: (5.870453311s)
--- PASS: TestAddons/parallel/Headlamp (23.17s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-brgbm" [ffce1cc3-9e78-48fb-a058-d1f33ade33ff] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003730059s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

                                                
                                    
TestAddons/parallel/LocalPath (58.53s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-865238 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-865238 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-865238 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c5e0eced-afdf-490d-90ae-3e1d7c4814e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c5e0eced-afdf-490d-90ae-3e1d7c4814e9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c5e0eced-afdf-490d-90ae-3e1d7c4814e9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.014975549s
addons_test.go:967: (dbg) Run:  kubectl --context addons-865238 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 ssh "cat /opt/local-path-provisioner/pvc-9a3490da-d28f-4010-8838-0a8f9b29e40e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-865238 delete pod test-local-path
2025/10/27 21:52:31 [DEBUG] GET http://192.168.39.175:5000
addons_test.go:992: (dbg) Run:  kubectl --context addons-865238 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-865238 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.602326107s)
--- PASS: TestAddons/parallel/LocalPath (58.53s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.77s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-xdn5t" [2f3c3b15-8971-406b-99c3-881d986c3fa5] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004206557s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.77s)

                                                
                                    
TestAddons/parallel/Yakd (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-z6vtd" [ced29f96-31b5-47ab-92bc-9fd57278dee0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005765099s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-865238 addons disable yakd --alsologtostderr -v=1: (5.974520928s)
--- PASS: TestAddons/parallel/Yakd (11.98s)

                                                
                                    
TestAddons/StoppedEnableDisable (90.6s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-865238
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-865238: (1m30.379904477s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-865238
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-865238
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-865238
--- PASS: TestAddons/StoppedEnableDisable (90.60s)

                                                
                                    
TestCertOptions (46.82s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-010556 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-010556 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (45.482808218s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-010556 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-010556 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-010556 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-010556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-010556
--- PASS: TestCertOptions (46.82s)

                                                
                                    
TestCertExpiration (275.89s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-858253 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-858253 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m11.850185176s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-858253 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-858253 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (23.143974899s)
helpers_test.go:175: Cleaning up "cert-expiration-858253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-858253
--- PASS: TestCertExpiration (275.89s)
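
The test first provisions certificates with --cert-expiration=3m and then re-runs start with 8760h (one year); the saved cluster config earlier in this report stores the default CertExpiration as 26280h0m0s (three years). These flag values are ordinary Go duration strings, as the small stdlib-only example below shows.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Duration strings as used by --cert-expiration and the saved cluster config.
	for _, s := range []string{"3m", "8760h", "26280h0m0s"} {
		d, err := time.ParseDuration(s)
		if err != nil {
			fmt.Println(s, "->", err)
			continue
		}
		fmt.Printf("%s -> %v (%.1f days)\n", s, d, d.Hours()/24)
	}
}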

                                                
                                    
TestForceSystemdFlag (85.86s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-364313 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-364313 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.806983447s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-364313 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-364313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-364313
--- PASS: TestForceSystemdFlag (85.86s)

                                                
                                    
TestForceSystemdEnv (69.12s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-672210 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-672210 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.168984539s)
helpers_test.go:175: Cleaning up "force-systemd-env-672210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-672210
--- PASS: TestForceSystemdEnv (69.12s)

                                                
                                    
TestErrorSpam/setup (39.51s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-226154 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-226154 --driver=kvm2  --container-runtime=crio
E1027 21:56:58.688269  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:56:58.694839  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:56:58.706337  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:56:58.727939  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:56:58.769579  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:56:58.851168  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:56:59.012839  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:56:59.334712  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:56:59.976918  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:57:01.258649  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:57:03.821679  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:57:08.943558  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:57:19.185198  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-226154 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-226154 --driver=kvm2  --container-runtime=crio: (39.51160017s)
--- PASS: TestErrorSpam/setup (39.51s)
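
The repeated "Loading client cert failed" warnings above refer to the client certificate of the earlier addons-865238 profile, which is no longer on disk. Note that the gaps between retries roughly double, from a few milliseconds up to about ten seconds, which is consistent with an exponential backoff. The following is a generic sketch of that retry pattern, for illustration only; it is not client-go's implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries fn, doubling the wait between attempts up to max,
// which is the spacing the retry timestamps above suggest (~6ms up to ~10s).
func retryWithBackoff(fn func() error, initial, max time.Duration, attempts int) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(wait)
		wait *= 2
		if wait > max {
			wait = max
		}
	}
	return err
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New("open client.crt: no such file or directory")
	}, 6*time.Millisecond, 10*time.Second, 5)
	fmt.Println(err)
}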

                                                
                                    
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
TestErrorSpam/unpause (1.85s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

                                                
                                    
TestErrorSpam/stop (85.08s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 stop
E1027 21:57:39.666724  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 21:58:20.629874  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 stop: (1m22.667567547s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 stop: (1.298461886s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-226154 --log_dir /tmp/nospam-226154 stop: (1.115734054s)
--- PASS: TestErrorSpam/stop (85.08s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21790-352679/.minikube/files/etc/test/nested/copy/356621/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (81.69s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880510 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1027 21:59:42.552997  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-880510 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m21.688474648s)
--- PASS: TestFunctional/serial/StartWithProxy (81.69s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (47.06s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1027 22:00:21.241545  356621 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880510 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-880510 --alsologtostderr -v=8: (47.058178755s)
functional_test.go:678: soft start took 47.059065791s for "functional-880510" cluster.
I1027 22:01:08.300213  356621 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (47.06s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-880510 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 cache add registry.k8s.io/pause:3.1: (1.202808885s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 cache add registry.k8s.io/pause:3.3: (1.123575851s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 cache add registry.k8s.io/pause:latest: (1.268645889s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.60s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-880510 /tmp/TestFunctionalserialCacheCmdcacheadd_local1781529976/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 cache add minikube-local-cache-test:functional-880510
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 cache add minikube-local-cache-test:functional-880510: (1.1777424s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 cache delete minikube-local-cache-test:functional-880510
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-880510
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.56s)
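
For reference, the local-image caching flow exercised above can be reproduced by hand with an installed minikube binary in place of the CI build; the build context directory below is a placeholder, and the tag simply mirrors the one this run used.
  docker build -t minikube-local-cache-test:functional-880510 ./context-dir              # any small Dockerfile will do
  minikube -p functional-880510 cache add minikube-local-cache-test:functional-880510    # copy the host image into the profile's cache
  minikube -p functional-880510 cache delete minikube-local-cache-test:functional-880510
  docker rmi minikube-local-cache-test:functional-880510                                 # clean up the host-side image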

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880510 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (190.576901ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 cache reload: (1.041565974s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
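
In shell terms, the reload round trip above is roughly the following; the intermediate inspecti is expected to fail because the image has just been removed inside the VM.
  minikube -p functional-880510 ssh sudo crictl rmi registry.k8s.io/pause:latest        # drop the image from the node
  minikube -p functional-880510 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
  minikube -p functional-880510 cache reload                                            # re-push everything in the host cache
  minikube -p functional-880510 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again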

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 kubectl -- --context functional-880510 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-880510 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (27.38s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880510 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-880510 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (27.381611045s)
functional_test.go:776: restart took 27.381757757s for "functional-880510" cluster.
I1027 22:01:43.377618  356621 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (27.38s)
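
A minimal sketch of applying the same extra apiserver flag outside the harness, assuming the functional-880510 profile already exists:
  minikube start -p functional-880510 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  kubectl --context functional-880510 get po -l tier=control-plane -n kube-system       # confirm the control plane settles back to Ready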

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-880510 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 logs: (1.554765776s)
--- PASS: TestFunctional/serial/LogsCmd (1.56s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 logs --file /tmp/TestFunctionalserialLogsFileCmd1240336625/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 logs --file /tmp/TestFunctionalserialLogsFileCmd1240336625/001/logs.txt: (1.544747484s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)
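
Both log commands map directly onto the CLI; the output path below is arbitrary.
  minikube -p functional-880510 logs                        # dump cluster logs to stdout
  minikube -p functional-880510 logs --file /tmp/logs.txt   # or write them to a file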

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.11s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-880510 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-880510
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-880510: exit status 115 (260.718678ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.204:32280 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-880510 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)
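
The failure mode checked here can be reproduced with any Service whose selector matches no running pod; minikube service then exits 115 with SVC_UNREACHABLE, as in the stderr above. A rough sketch using the test's own manifest path:
  kubectl --context functional-880510 apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p functional-880510        # expected: exit status 115, SVC_UNREACHABLE
  kubectl --context functional-880510 delete -f testdata/invalidsvc.yaml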

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880510 config get cpus: exit status 14 (74.212289ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880510 config get cpus: exit status 14 (67.496565ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
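
The two exit-status-14 results above correspond to reading a key that is not set; the full round trip looks like this.
  minikube -p functional-880510 config get cpus     # exit 14 while the key is unset
  minikube -p functional-880510 config set cpus 2
  minikube -p functional-880510 config get cpus     # prints 2
  minikube -p functional-880510 config unset cpus
  minikube -p functional-880510 config get cpus     # exit 14 again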

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (28.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-880510 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-880510 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 363000: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.73s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880510 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-880510 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.575326ms)

                                                
                                                
-- stdout --
	* [functional-880510] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:02:03.483548  362872 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:02:03.483810  362872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:02:03.483820  362872 out.go:374] Setting ErrFile to fd 2...
	I1027 22:02:03.483825  362872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:02:03.484091  362872 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:02:03.484673  362872 out.go:368] Setting JSON to false
	I1027 22:02:03.485752  362872 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6270,"bootTime":1761596253,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:02:03.485856  362872 start.go:143] virtualization: kvm guest
	I1027 22:02:03.491128  362872 out.go:179] * [functional-880510] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:02:03.492800  362872 notify.go:221] Checking for updates...
	I1027 22:02:03.492875  362872 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:02:03.494688  362872 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:02:03.498154  362872 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 22:02:03.499683  362872 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 22:02:03.502314  362872 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:02:03.506174  362872 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:02:03.508779  362872 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:02:03.509523  362872 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:02:03.547957  362872 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 22:02:03.549265  362872 start.go:307] selected driver: kvm2
	I1027 22:02:03.549282  362872 start.go:928] validating driver "kvm2" against &{Name:functional-880510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-880510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:02:03.549419  362872 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:02:03.551922  362872 out.go:203] 
	W1027 22:02:03.552995  362872 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1027 22:02:03.554054  362872 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880510 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
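
The dry run validates the requested resources against the existing profile without touching the VM; the 250MB request is deliberately below the 1800MB minimum reported in the stderr. A sketch of triggering the same guard manually:
  minikube start -p functional-880510 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
  # exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY; a memory value at or above the minimum lets the dry run pass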

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880510 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-880510 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (139.068785ms)

                                                
                                                
-- stdout --
	* [functional-880510] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:02:01.730405  362726 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:02:01.730683  362726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:02:01.730692  362726 out.go:374] Setting ErrFile to fd 2...
	I1027 22:02:01.730696  362726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:02:01.731081  362726 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:02:01.731579  362726 out.go:368] Setting JSON to false
	I1027 22:02:01.732616  362726 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6269,"bootTime":1761596253,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:02:01.732710  362726 start.go:143] virtualization: kvm guest
	I1027 22:02:01.736055  362726 out.go:179] * [functional-880510] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1027 22:02:01.738248  362726 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:02:01.738249  362726 notify.go:221] Checking for updates...
	I1027 22:02:01.739817  362726 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:02:01.741474  362726 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 22:02:01.743461  362726 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 22:02:01.745137  362726 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:02:01.746512  362726 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:02:01.748578  362726 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:02:01.749324  362726 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:02:01.786964  362726 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1027 22:02:01.789439  362726 start.go:307] selected driver: kvm2
	I1027 22:02:01.789467  362726 start.go:928] validating driver "kvm2" against &{Name:functional-880510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-880510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:02:01.789628  362726 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:02:01.792217  362726 out.go:203] 
	W1027 22:02:01.793562  362726 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1027 22:02:01.794784  362726 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)
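
The three invocations differ only in output shaping; a sketch with an installed minikube, using the template fields this run reports:
  minikube -p functional-880510 status                                                              # human-readable summary
  minikube -p functional-880510 status -f '{{.Host}},{{.Kubelet}},{{.APIServer}},{{.Kubeconfig}}'   # Go-template output
  minikube -p functional-880510 status -o json                                                      # machine-readable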

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-880510 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-880510 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hv5xr" [22f4067e-7484-448a-9c33-1da56e971c3a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-hv5xr" [22f4067e-7484-448a-9c33-1da56e971c3a] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008602152s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.204:30218
functional_test.go:1680: http://192.168.39.204:30218: success! body:
Request served by hello-node-connect-7d85dfc575-hv5xr

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.204:30218
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)
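
The connectivity check amounts to exposing a deployment over a NodePort and curling the URL minikube resolves; the echoed request above confirms the route works end to end. Roughly:
  kubectl --context functional-880510 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-880510 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(minikube -p functional-880510 service hello-node-connect --url)
  curl "$URL"      # echo-server replies with the request it received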

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (45.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [1282dcbf-870d-427c-bc55-aa4870a73761] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006308943s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-880510 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-880510 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-880510 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-880510 apply -f testdata/storage-provisioner/pod.yaml
I1027 22:01:58.582324  356621 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ce5c9131-bbcd-48ef-b357-25e5f28df432] Pending
E1027 22:01:58.680706  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [ce5c9131-bbcd-48ef-b357-25e5f28df432] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ce5c9131-bbcd-48ef-b357-25e5f28df432] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.011720848s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-880510 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-880510 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-880510 delete -f testdata/storage-provisioner/pod.yaml: (2.50015451s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-880510 apply -f testdata/storage-provisioner/pod.yaml
I1027 22:02:16.439007  356621 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [dd7484eb-aa8c-400d-8fbe-51c15987d71b] Pending
helpers_test.go:352: "sp-pod" [dd7484eb-aa8c-400d-8fbe-51c15987d71b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [dd7484eb-aa8c-400d-8fbe-51c15987d71b] Running
2025/10/27 22:02:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.006170746s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-880510 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.44s)
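
The persistence check is a write, delete, recreate cycle: a file written through the claim by the first pod must still be visible to a second pod that mounts the same PVC. In outline, using the test's own manifests:
  kubectl --context functional-880510 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-880510 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-880510 exec sp-pod -- touch /tmp/mount/foo        # write through the mounted claim
  kubectl --context functional-880510 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-880510 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-880510 exec sp-pod -- ls /tmp/mount               # foo should still be listed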

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh -n functional-880510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 cp functional-880510:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3891000106/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh -n functional-880510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh -n functional-880510 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-880510 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-gsktv" [0813d580-9fb0-4468-81fe-a2d354d182cc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-gsktv" [0813d580-9fb0-4468-81fe-a2d354d182cc] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.271013778s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-880510 exec mysql-5bb876957f-gsktv -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-880510 exec mysql-5bb876957f-gsktv -- mysql -ppassword -e "show databases;": exit status 1 (247.184672ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1027 22:02:27.427389  356621 retry.go:31] will retry after 827.587056ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-880510 exec mysql-5bb876957f-gsktv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.79s)
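
The first query fails because mysqld inside the pod is not yet accepting socket connections even though the pod already reports Running; the harness simply retries after a short delay. A manual equivalent, picking the pod by label:
  kubectl --context functional-880510 replace --force -f testdata/mysql.yaml
  POD=$(kubectl --context functional-880510 get pods -l app=mysql -o name | head -n1)
  kubectl --context functional-880510 exec "$POD" -- mysql -ppassword -e "show databases;"   # retry until mysqld is up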

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/356621/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo cat /etc/test/nested/copy/356621/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/356621.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo cat /etc/ssl/certs/356621.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/356621.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo cat /usr/share/ca-certificates/356621.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3566212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo cat /etc/ssl/certs/3566212.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3566212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo cat /usr/share/ca-certificates/3566212.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-880510 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880510 ssh "sudo systemctl is-active docker": exit status 1 (239.753159ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880510 ssh "sudo systemctl is-active containerd": exit status 1 (265.931234ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
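
Because this run pins --container-runtime=crio, the other runtimes are expected to be disabled inside the VM; the exit status 3 from ssh is just systemctl is-active reporting "inactive". A spot check (the crio unit name is an assumption here):
  minikube -p functional-880510 ssh "sudo systemctl is-active crio"         # should print active
  minikube -p functional-880510 ssh "sudo systemctl is-active docker"       # inactive, non-zero exit
  minikube -p functional-880510 ssh "sudo systemctl is-active containerd"   # inactive, non-zero exit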

                                                
                                    
x
+
TestFunctional/parallel/License (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdany-port536635389/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761602510669445505" to /tmp/TestFunctionalparallelMountCmdany-port536635389/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761602510669445505" to /tmp/TestFunctionalparallelMountCmdany-port536635389/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761602510669445505" to /tmp/TestFunctionalparallelMountCmdany-port536635389/001/test-1761602510669445505
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (197.753325ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 22:01:50.867638  356621 retry.go:31] will retry after 738.879853ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 27 22:01 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 27 22:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 27 22:01 test-1761602510669445505
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh cat /mount-9p/test-1761602510669445505
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-880510 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [49592d11-8000-455c-abf8-63ec37a10eb2] Pending
helpers_test.go:352: "busybox-mount" [49592d11-8000-455c-abf8-63ec37a10eb2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [49592d11-8000-455c-abf8-63ec37a10eb2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [49592d11-8000-455c-abf8-63ec37a10eb2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003768344s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-880510 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdany-port536635389/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.40s)
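
The 9p mount flow boils down to starting the mount process, verifying it from inside the VM, and tearing it down; the host directory below is a placeholder, and the mount command stays in the foreground unless backgrounded.
  minikube mount -p functional-880510 /tmp/hostdir:/mount-9p &               # keep the 9p server running in the background
  minikube -p functional-880510 ssh "findmnt -T /mount-9p | grep 9p"         # confirm the mount is visible in the VM
  minikube -p functional-880510 ssh -- ls -la /mount-9p
  minikube -p functional-880510 ssh "sudo umount -f /mount-9p"               # unmount, then stop the backgrounded process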

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "345.041508ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.192736ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-880510 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-880510 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-m5jvt" [f2159834-763d-4075-9c9b-e307b38ffb26] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-m5jvt" [f2159834-763d-4075-9c9b-e307b38ffb26] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005763895s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "423.453638ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "68.751272ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
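A sketch of the two profile-listing invocations timed above; the --light variant skips probing cluster status, which is presumably why it returns in roughly 70ms versus 400ms for the full listing:

    # list profiles as JSON
    minikube profile list -o json
    # faster listing that does not validate cluster status
    minikube profile list -o json --light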

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdspecific-port2302256795/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (172.618554ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 22:01:58.239126  356621 retry.go:31] will retry after 642.898473ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdspecific-port2302256795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880510 ssh "sudo umount -f /mount-9p": exit status 1 (167.496683ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-880510 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdspecific-port2302256795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.56s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3053967631/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3053967631/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3053967631/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T" /mount1: exit status 1 (179.065633ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 22:01:59.804728  356621 retry.go:31] will retry after 708.489419ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-880510 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3053967631/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3053967631/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880510 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3053967631/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)
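A sketch of the cleanup path this test verifies, assuming an illustrative host path; --kill=true is the flag the test itself uses to tear down every mount process belonging to the profile at once:

    # start several mounts of the same host directory
    minikube mount -p functional-880510 /tmp/shared:/mount1 &
    minikube mount -p functional-880510 /tmp/shared:/mount2 &
    minikube mount -p functional-880510 /tmp/shared:/mount3 &
    # kill all background mount processes for this profile
    minikube mount -p functional-880510 --kill=true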

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-880510 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-880510
localhost/kicbase/echo-server:functional-880510
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-880510 image ls --format short --alsologtostderr:
I1027 22:02:21.570078  363322 out.go:360] Setting OutFile to fd 1 ...
I1027 22:02:21.570470  363322 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:21.570485  363322 out.go:374] Setting ErrFile to fd 2...
I1027 22:02:21.570492  363322 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:21.570828  363322 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
I1027 22:02:21.571742  363322 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:21.571935  363322 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:21.574783  363322 ssh_runner.go:195] Run: systemctl --version
I1027 22:02:21.577665  363322 main.go:143] libmachine: domain functional-880510 has defined MAC address 52:54:00:c0:c1:c1 in network mk-functional-880510
I1027 22:02:21.578347  363322 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:c1:c1", ip: ""} in network mk-functional-880510: {Iface:virbr1 ExpiryTime:2025-10-27 22:59:15 +0000 UTC Type:0 Mac:52:54:00:c0:c1:c1 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:functional-880510 Clientid:01:52:54:00:c0:c1:c1}
I1027 22:02:21.578385  363322 main.go:143] libmachine: domain functional-880510 has defined IP address 192.168.39.204 and MAC address 52:54:00:c0:c1:c1 in network mk-functional-880510
I1027 22:02:21.578629  363322 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/functional-880510/id_rsa Username:docker}
I1027 22:02:21.681549  363322 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
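This test and the three that follow exercise the same image inventory in the four output formats; a sketch of the invocations against the running profile:

    minikube -p functional-880510 image ls --format short
    minikube -p functional-880510 image ls --format table
    minikube -p functional-880510 image ls --format json
    minikube -p functional-880510 image ls --format yaml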

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-880510 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/my-image                      │ functional-880510  │ 289395d6ad947 │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/minikube-local-cache-test     │ functional-880510  │ 0dfb5dfe59b8f │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-880510  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-880510 image ls --format table --alsologtostderr:
I1027 22:02:28.488954  363423 out.go:360] Setting OutFile to fd 1 ...
I1027 22:02:28.489101  363423 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:28.489115  363423 out.go:374] Setting ErrFile to fd 2...
I1027 22:02:28.489121  363423 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:28.489332  363423 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
I1027 22:02:28.489949  363423 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:28.490051  363423 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:28.492234  363423 ssh_runner.go:195] Run: systemctl --version
I1027 22:02:28.494958  363423 main.go:143] libmachine: domain functional-880510 has defined MAC address 52:54:00:c0:c1:c1 in network mk-functional-880510
I1027 22:02:28.495370  363423 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:c1:c1", ip: ""} in network mk-functional-880510: {Iface:virbr1 ExpiryTime:2025-10-27 22:59:15 +0000 UTC Type:0 Mac:52:54:00:c0:c1:c1 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:functional-880510 Clientid:01:52:54:00:c0:c1:c1}
I1027 22:02:28.495399  363423 main.go:143] libmachine: domain functional-880510 has defined IP address 192.168.39.204 and MAC address 52:54:00:c0:c1:c1 in network mk-functional-880510
I1027 22:02:28.495607  363423 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/functional-880510/id_rsa Username:docker}
I1027 22:02:28.587816  363423 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-880510 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":
"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"ae10c18eaabcec0bf6f03df7a7e886196110ae070b364c9c2b6bd6b13308b4b9","repoDigests":["docker.io/library/acdc2ea4f080b361f7688fd17ab079c3c0bf536b4e0ef402520203e2e08a76be-tmp@sha256:8cb2bfe1c57c8794e75ee48cb77e80609f5fb21f8d12dc53fbe8545f10ebdef6"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"657fdcd1c3659cf57cfaa
13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["docker.io/library/nginx:latest"],"size":"155467611"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195
976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbas
e/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-880510"],"size":"4944818"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0dfb5dfe59b8f8ad9d2fc6f2fa0669d27d82ab63a9375f895357b9b0d7ca789e","repoDigests":["localhost/minikube-local-cache-test@sha256:b3638ab226c
e0f9903ad84213a4175f2b5453a5da507e52cd2a9301580b9a3a2"],"repoTags":["localhost/minikube-local-cache-test:functional-880510"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-
scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"289395d6ad947f516d9a67751b15a0a223fd24625754fe082ab4d5e9f8a4c492","repoDigests":["localhost/my-image@sha256:48fb4e517e86c1bcda6e7638a71c63fafa8be51146519fa706f47ffb3d97ca59"],"repoTags":["localhost/my-image:functional-880510"],"size":"1468600"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5
d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-880510 image ls --format json --alsologtostderr:
I1027 22:02:28.444172  363412 out.go:360] Setting OutFile to fd 1 ...
I1027 22:02:28.444532  363412 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:28.444549  363412 out.go:374] Setting ErrFile to fd 2...
I1027 22:02:28.444556  363412 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:28.444934  363412 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
I1027 22:02:28.445980  363412 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:28.446166  363412 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:28.448585  363412 ssh_runner.go:195] Run: systemctl --version
I1027 22:02:28.451196  363412 main.go:143] libmachine: domain functional-880510 has defined MAC address 52:54:00:c0:c1:c1 in network mk-functional-880510
I1027 22:02:28.451645  363412 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:c1:c1", ip: ""} in network mk-functional-880510: {Iface:virbr1 ExpiryTime:2025-10-27 22:59:15 +0000 UTC Type:0 Mac:52:54:00:c0:c1:c1 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:functional-880510 Clientid:01:52:54:00:c0:c1:c1}
I1027 22:02:28.451675  363412 main.go:143] libmachine: domain functional-880510 has defined IP address 192.168.39.204 and MAC address 52:54:00:c0:c1:c1 in network mk-functional-880510
I1027 22:02:28.451860  363412 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/functional-880510/id_rsa Username:docker}
I1027 22:02:28.541587  363412 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-880510 image ls --format yaml --alsologtostderr:
- id: 0dfb5dfe59b8f8ad9d2fc6f2fa0669d27d82ab63a9375f895357b9b0d7ca789e
repoDigests:
- localhost/minikube-local-cache-test@sha256:b3638ab226ce0f9903ad84213a4175f2b5453a5da507e52cd2a9301580b9a3a2
repoTags:
- localhost/minikube-local-cache-test:functional-880510
size: "3330"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8
repoTags:
- docker.io/library/nginx:latest
size: "155467611"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-880510
size: "4944818"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-880510 image ls --format yaml --alsologtostderr:
I1027 22:02:21.812650  363332 out.go:360] Setting OutFile to fd 1 ...
I1027 22:02:21.812957  363332 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:21.812970  363332 out.go:374] Setting ErrFile to fd 2...
I1027 22:02:21.812977  363332 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:21.813204  363332 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
I1027 22:02:21.813852  363332 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:21.814003  363332 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:21.816310  363332 ssh_runner.go:195] Run: systemctl --version
I1027 22:02:21.818940  363332 main.go:143] libmachine: domain functional-880510 has defined MAC address 52:54:00:c0:c1:c1 in network mk-functional-880510
I1027 22:02:21.819538  363332 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:c1:c1", ip: ""} in network mk-functional-880510: {Iface:virbr1 ExpiryTime:2025-10-27 22:59:15 +0000 UTC Type:0 Mac:52:54:00:c0:c1:c1 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:functional-880510 Clientid:01:52:54:00:c0:c1:c1}
I1027 22:02:21.819577  363332 main.go:143] libmachine: domain functional-880510 has defined IP address 192.168.39.204 and MAC address 52:54:00:c0:c1:c1 in network mk-functional-880510
I1027 22:02:21.819762  363332 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/functional-880510/id_rsa Username:docker}
I1027 22:02:21.921924  363332 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880510 ssh pgrep buildkitd: exit status 1 (166.456933ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image build -t localhost/my-image:functional-880510 testdata/build --alsologtostderr
E1027 22:02:26.394459  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 image build -t localhost/my-image:functional-880510 testdata/build --alsologtostderr: (5.901912643s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-880510 image build -t localhost/my-image:functional-880510 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ae10c18eaab
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-880510
--> 289395d6ad9
Successfully tagged localhost/my-image:functional-880510
289395d6ad947f516d9a67751b15a0a223fd24625754fe082ab4d5e9f8a4c492
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-880510 image build -t localhost/my-image:functional-880510 testdata/build --alsologtostderr:
I1027 22:02:22.271336  363354 out.go:360] Setting OutFile to fd 1 ...
I1027 22:02:22.271619  363354 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:22.271631  363354 out.go:374] Setting ErrFile to fd 2...
I1027 22:02:22.271636  363354 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:22.271863  363354 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
I1027 22:02:22.272501  363354 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:22.273264  363354 config.go:182] Loaded profile config "functional-880510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:22.275546  363354 ssh_runner.go:195] Run: systemctl --version
I1027 22:02:22.278195  363354 main.go:143] libmachine: domain functional-880510 has defined MAC address 52:54:00:c0:c1:c1 in network mk-functional-880510
I1027 22:02:22.278633  363354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:c1:c1", ip: ""} in network mk-functional-880510: {Iface:virbr1 ExpiryTime:2025-10-27 22:59:15 +0000 UTC Type:0 Mac:52:54:00:c0:c1:c1 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:functional-880510 Clientid:01:52:54:00:c0:c1:c1}
I1027 22:02:22.278659  363354 main.go:143] libmachine: domain functional-880510 has defined IP address 192.168.39.204 and MAC address 52:54:00:c0:c1:c1 in network mk-functional-880510
I1027 22:02:22.278790  363354 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/functional-880510/id_rsa Username:docker}
I1027 22:02:22.383759  363354 build_images.go:162] Building image from path: /tmp/build.755285705.tar
I1027 22:02:22.383843  363354 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1027 22:02:22.399926  363354 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.755285705.tar
I1027 22:02:22.411160  363354 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.755285705.tar: stat -c "%s %y" /var/lib/minikube/build/build.755285705.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.755285705.tar': No such file or directory
I1027 22:02:22.411223  363354 ssh_runner.go:362] scp /tmp/build.755285705.tar --> /var/lib/minikube/build/build.755285705.tar (3072 bytes)
I1027 22:02:22.462015  363354 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.755285705
I1027 22:02:22.490259  363354 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.755285705 -xf /var/lib/minikube/build/build.755285705.tar
I1027 22:02:22.516311  363354 crio.go:315] Building image: /var/lib/minikube/build/build.755285705
I1027 22:02:22.516409  363354 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-880510 /var/lib/minikube/build/build.755285705 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1027 22:02:28.050825  363354 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-880510 /var/lib/minikube/build/build.755285705 --cgroup-manager=cgroupfs: (5.534370561s)
I1027 22:02:28.050931  363354 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.755285705
I1027 22:02:28.078346  363354 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.755285705.tar
I1027 22:02:28.103316  363354 build_images.go:218] Built localhost/my-image:functional-880510 from /tmp/build.755285705.tar
I1027 22:02:28.103369  363354 build_images.go:134] succeeded building to: functional-880510
I1027 22:02:28.103377  363354 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.33s)
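Judging from the STEP 1/3 .. 3/3 lines above, testdata/build contains a small Dockerfile plus a content.txt file; the reconstruction below is inferred from the build output, not copied from the repository, and the build itself is driven through podman inside the guest:

    # approximate contents of testdata/build/Dockerfile (inferred):
    #   FROM gcr.io/k8s-minikube/busybox
    #   RUN true
    #   ADD content.txt /
    minikube -p functional-880510 image build -t localhost/my-image:functional-880510 testdata/build
    # the new tag should now appear in the image list
    minikube -p functional-880510 image ls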

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.023014875s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-880510
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 service list -o json
functional_test.go:1504: Took "336.999365ms" to run "out/minikube-linux-amd64 -p functional-880510 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.204:31477
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image load --daemon kicbase/echo-server:functional-880510 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 image load --daemon kicbase/echo-server:functional-880510 --alsologtostderr: (4.241041196s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.204:31477
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
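The ServiceCmd group above boils down to resolving the NodePort endpoint for the hello-node service created earlier; a sketch of the same lookups outside the test harness:

    # list services known to the profile
    minikube -p functional-880510 service list
    # print the plain URL, e.g. http://192.168.39.204:31477
    minikube -p functional-880510 service hello-node --url
    # print the https form of the same endpoint
    minikube -p functional-880510 service --namespace=default --https --url hello-node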

                                                
                                    
TestFunctional/parallel/Version/short (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

                                                
                                    
TestFunctional/parallel/Version/components (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image load --daemon kicbase/echo-server:functional-880510 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-880510
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image load --daemon kicbase/echo-server:functional-880510 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image save kicbase/echo-server:functional-880510 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-880510 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.281741709s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-880510
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-880510 image save --daemon kicbase/echo-server:functional-880510 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-880510
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)
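The three image save/load tests above amount to a round trip through a tarball and back through the host docker daemon; a sketch with an illustrative tarball path (./echo-server-save.tar is an assumption):

    # export the tagged image from the cluster runtime to a tarball, then load it back
    minikube -p functional-880510 image save kicbase/echo-server:functional-880510 ./echo-server-save.tar
    minikube -p functional-880510 image load ./echo-server-save.tar
    # push the image straight into the host docker daemon and confirm it landed
    minikube -p functional-880510 image save --daemon kicbase/echo-server:functional-880510
    docker image inspect localhost/kicbase/echo-server:functional-880510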

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-880510
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-880510
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-880510
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (235.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m55.040754437s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (235.68s)
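A sketch of bringing up the same multi-control-plane cluster by hand, using the flags shown in the test invocation (minus the test-only logging flags):

    # start an HA cluster on the kvm2 driver with the crio runtime and wait for all components
    minikube -p ha-190114 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
    # confirm every control-plane and worker node reports Running/Ready
    minikube -p ha-190114 status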

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 kubectl -- rollout status deployment/busybox: (3.210673546s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-dbkcl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-qbd7v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-vb4mz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-dbkcl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-qbd7v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-vb4mz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-dbkcl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-qbd7v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-vb4mz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-dbkcl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-dbkcl -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-qbd7v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-qbd7v -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-vb4mz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 kubectl -- exec busybox-7b57f96db7-vb4mz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.45s)
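
Note on the check above: the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` is only a text-scrape that pulls the resolved address out of nslookup's output before pinging it. The following is a minimal, illustrative Go sketch of the same two steps run outside the test suite; it assumes `host.minikube.internal` resolves in the environment it runs in and is not part of ha_test.go.

// Rough equivalent of the shell pipeline above: resolve
// host.minikube.internal, then ping the returned address once.
package main

import (
	"fmt"
	"net"
	"os/exec"
)

func main() {
	// net.LookupHost replaces the nslookup | awk | cut text parsing.
	addrs, err := net.LookupHost("host.minikube.internal")
	if err != nil || len(addrs) == 0 {
		fmt.Println("lookup failed:", err)
		return
	}
	// Mirrors the "ping -c 1 <addr>" reachability check in the log.
	out, err := exec.Command("ping", "-c", "1", addrs[0]).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("ping failed:", err)
	}
}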

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (46.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 node add --alsologtostderr -v 5
E1027 22:06:51.354514  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:51.361027  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:51.372512  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:51.394096  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:51.435634  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:51.517162  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:51.678921  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:52.001080  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:52.643200  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:53.924934  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:56.487302  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:58.680491  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:07:01.609315  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:07:11.851402  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 node add --alsologtostderr -v 5: (46.129637152s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-190114 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (11.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp testdata/cp-test.txt ha-190114:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1410447709/001/cp-test_ha-190114.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114:/home/docker/cp-test.txt ha-190114-m02:/home/docker/cp-test_ha-190114_ha-190114-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m02 "sudo cat /home/docker/cp-test_ha-190114_ha-190114-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114:/home/docker/cp-test.txt ha-190114-m03:/home/docker/cp-test_ha-190114_ha-190114-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m03 "sudo cat /home/docker/cp-test_ha-190114_ha-190114-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114:/home/docker/cp-test.txt ha-190114-m04:/home/docker/cp-test_ha-190114_ha-190114-m04.txt
E1027 22:07:32.333257  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m04 "sudo cat /home/docker/cp-test_ha-190114_ha-190114-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp testdata/cp-test.txt ha-190114-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1410447709/001/cp-test_ha-190114-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m02:/home/docker/cp-test.txt ha-190114:/home/docker/cp-test_ha-190114-m02_ha-190114.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114 "sudo cat /home/docker/cp-test_ha-190114-m02_ha-190114.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m02:/home/docker/cp-test.txt ha-190114-m03:/home/docker/cp-test_ha-190114-m02_ha-190114-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m03 "sudo cat /home/docker/cp-test_ha-190114-m02_ha-190114-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m02:/home/docker/cp-test.txt ha-190114-m04:/home/docker/cp-test_ha-190114-m02_ha-190114-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m04 "sudo cat /home/docker/cp-test_ha-190114-m02_ha-190114-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp testdata/cp-test.txt ha-190114-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1410447709/001/cp-test_ha-190114-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m03:/home/docker/cp-test.txt ha-190114:/home/docker/cp-test_ha-190114-m03_ha-190114.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114 "sudo cat /home/docker/cp-test_ha-190114-m03_ha-190114.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m03:/home/docker/cp-test.txt ha-190114-m02:/home/docker/cp-test_ha-190114-m03_ha-190114-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m02 "sudo cat /home/docker/cp-test_ha-190114-m03_ha-190114-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m03:/home/docker/cp-test.txt ha-190114-m04:/home/docker/cp-test_ha-190114-m03_ha-190114-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m04 "sudo cat /home/docker/cp-test_ha-190114-m03_ha-190114-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp testdata/cp-test.txt ha-190114-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1410447709/001/cp-test_ha-190114-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m04:/home/docker/cp-test.txt ha-190114:/home/docker/cp-test_ha-190114-m04_ha-190114.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114 "sudo cat /home/docker/cp-test_ha-190114-m04_ha-190114.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m04:/home/docker/cp-test.txt ha-190114-m02:/home/docker/cp-test_ha-190114-m04_ha-190114-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m02 "sudo cat /home/docker/cp-test_ha-190114-m04_ha-190114-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 cp ha-190114-m04:/home/docker/cp-test.txt ha-190114-m03:/home/docker/cp-test_ha-190114-m04_ha-190114-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 ssh -n ha-190114-m03 "sudo cat /home/docker/cp-test_ha-190114-m04_ha-190114-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (83.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 node stop m02 --alsologtostderr -v 5
E1027 22:08:13.295712  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 node stop m02 --alsologtostderr -v 5: (1m23.063379833s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190114 status --alsologtostderr -v 5: exit status 7 (551.32196ms)

                                                
                                                
-- stdout --
	ha-190114
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190114-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-190114-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-190114-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:09:03.939076  366582 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:09:03.939356  366582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:09:03.939366  366582 out.go:374] Setting ErrFile to fd 2...
	I1027 22:09:03.939370  366582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:09:03.939571  366582 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:09:03.939756  366582 out.go:368] Setting JSON to false
	I1027 22:09:03.939787  366582 mustload.go:66] Loading cluster: ha-190114
	I1027 22:09:03.939970  366582 notify.go:221] Checking for updates...
	I1027 22:09:03.940242  366582 config.go:182] Loaded profile config "ha-190114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:09:03.940266  366582 status.go:174] checking status of ha-190114 ...
	I1027 22:09:03.942621  366582 status.go:371] ha-190114 host status = "Running" (err=<nil>)
	I1027 22:09:03.942653  366582 host.go:66] Checking if "ha-190114" exists ...
	I1027 22:09:03.945547  366582 main.go:143] libmachine: domain ha-190114 has defined MAC address 52:54:00:44:9b:9c in network mk-ha-190114
	I1027 22:09:03.946185  366582 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:9b:9c", ip: ""} in network mk-ha-190114: {Iface:virbr1 ExpiryTime:2025-10-27 23:02:55 +0000 UTC Type:0 Mac:52:54:00:44:9b:9c Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-190114 Clientid:01:52:54:00:44:9b:9c}
	I1027 22:09:03.946215  366582 main.go:143] libmachine: domain ha-190114 has defined IP address 192.168.39.189 and MAC address 52:54:00:44:9b:9c in network mk-ha-190114
	I1027 22:09:03.946379  366582 host.go:66] Checking if "ha-190114" exists ...
	I1027 22:09:03.946590  366582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:09:03.949060  366582 main.go:143] libmachine: domain ha-190114 has defined MAC address 52:54:00:44:9b:9c in network mk-ha-190114
	I1027 22:09:03.949436  366582 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:9b:9c", ip: ""} in network mk-ha-190114: {Iface:virbr1 ExpiryTime:2025-10-27 23:02:55 +0000 UTC Type:0 Mac:52:54:00:44:9b:9c Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-190114 Clientid:01:52:54:00:44:9b:9c}
	I1027 22:09:03.949470  366582 main.go:143] libmachine: domain ha-190114 has defined IP address 192.168.39.189 and MAC address 52:54:00:44:9b:9c in network mk-ha-190114
	I1027 22:09:03.949677  366582 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/ha-190114/id_rsa Username:docker}
	I1027 22:09:04.043844  366582 ssh_runner.go:195] Run: systemctl --version
	I1027 22:09:04.052832  366582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:09:04.076120  366582 kubeconfig.go:125] found "ha-190114" server: "https://192.168.39.254:8443"
	I1027 22:09:04.076162  366582 api_server.go:166] Checking apiserver status ...
	I1027 22:09:04.076201  366582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:09:04.102398  366582 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup
	W1027 22:09:04.117573  366582 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:09:04.117649  366582 ssh_runner.go:195] Run: ls
	I1027 22:09:04.124105  366582 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1027 22:09:04.129511  366582 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1027 22:09:04.129540  366582 status.go:463] ha-190114 apiserver status = Running (err=<nil>)
	I1027 22:09:04.129551  366582 status.go:176] ha-190114 status: &{Name:ha-190114 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:09:04.129573  366582 status.go:174] checking status of ha-190114-m02 ...
	I1027 22:09:04.131465  366582 status.go:371] ha-190114-m02 host status = "Stopped" (err=<nil>)
	I1027 22:09:04.131490  366582 status.go:384] host is not running, skipping remaining checks
	I1027 22:09:04.131498  366582 status.go:176] ha-190114-m02 status: &{Name:ha-190114-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:09:04.131523  366582 status.go:174] checking status of ha-190114-m03 ...
	I1027 22:09:04.133013  366582 status.go:371] ha-190114-m03 host status = "Running" (err=<nil>)
	I1027 22:09:04.133031  366582 host.go:66] Checking if "ha-190114-m03" exists ...
	I1027 22:09:04.135190  366582 main.go:143] libmachine: domain ha-190114-m03 has defined MAC address 52:54:00:25:e8:c8 in network mk-ha-190114
	I1027 22:09:04.135719  366582 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:e8:c8", ip: ""} in network mk-ha-190114: {Iface:virbr1 ExpiryTime:2025-10-27 23:04:56 +0000 UTC Type:0 Mac:52:54:00:25:e8:c8 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-190114-m03 Clientid:01:52:54:00:25:e8:c8}
	I1027 22:09:04.135746  366582 main.go:143] libmachine: domain ha-190114-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:e8:c8 in network mk-ha-190114
	I1027 22:09:04.135919  366582 host.go:66] Checking if "ha-190114-m03" exists ...
	I1027 22:09:04.136117  366582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:09:04.138522  366582 main.go:143] libmachine: domain ha-190114-m03 has defined MAC address 52:54:00:25:e8:c8 in network mk-ha-190114
	I1027 22:09:04.139035  366582 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:e8:c8", ip: ""} in network mk-ha-190114: {Iface:virbr1 ExpiryTime:2025-10-27 23:04:56 +0000 UTC Type:0 Mac:52:54:00:25:e8:c8 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-190114-m03 Clientid:01:52:54:00:25:e8:c8}
	I1027 22:09:04.139072  366582 main.go:143] libmachine: domain ha-190114-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:e8:c8 in network mk-ha-190114
	I1027 22:09:04.139239  366582 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/ha-190114-m03/id_rsa Username:docker}
	I1027 22:09:04.232988  366582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:09:04.255534  366582 kubeconfig.go:125] found "ha-190114" server: "https://192.168.39.254:8443"
	I1027 22:09:04.255614  366582 api_server.go:166] Checking apiserver status ...
	I1027 22:09:04.255664  366582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:09:04.279870  366582 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1762/cgroup
	W1027 22:09:04.294167  366582 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:09:04.294246  366582 ssh_runner.go:195] Run: ls
	I1027 22:09:04.299948  366582 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1027 22:09:04.305078  366582 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1027 22:09:04.305107  366582 status.go:463] ha-190114-m03 apiserver status = Running (err=<nil>)
	I1027 22:09:04.305116  366582 status.go:176] ha-190114-m03 status: &{Name:ha-190114-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:09:04.305133  366582 status.go:174] checking status of ha-190114-m04 ...
	I1027 22:09:04.306722  366582 status.go:371] ha-190114-m04 host status = "Running" (err=<nil>)
	I1027 22:09:04.306743  366582 host.go:66] Checking if "ha-190114-m04" exists ...
	I1027 22:09:04.309061  366582 main.go:143] libmachine: domain ha-190114-m04 has defined MAC address 52:54:00:98:1e:e1 in network mk-ha-190114
	I1027 22:09:04.309437  366582 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:98:1e:e1", ip: ""} in network mk-ha-190114: {Iface:virbr1 ExpiryTime:2025-10-27 23:06:58 +0000 UTC Type:0 Mac:52:54:00:98:1e:e1 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-190114-m04 Clientid:01:52:54:00:98:1e:e1}
	I1027 22:09:04.309462  366582 main.go:143] libmachine: domain ha-190114-m04 has defined IP address 192.168.39.211 and MAC address 52:54:00:98:1e:e1 in network mk-ha-190114
	I1027 22:09:04.309600  366582 host.go:66] Checking if "ha-190114-m04" exists ...
	I1027 22:09:04.309848  366582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:09:04.311750  366582 main.go:143] libmachine: domain ha-190114-m04 has defined MAC address 52:54:00:98:1e:e1 in network mk-ha-190114
	I1027 22:09:04.312090  366582 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:98:1e:e1", ip: ""} in network mk-ha-190114: {Iface:virbr1 ExpiryTime:2025-10-27 23:06:58 +0000 UTC Type:0 Mac:52:54:00:98:1e:e1 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-190114-m04 Clientid:01:52:54:00:98:1e:e1}
	I1027 22:09:04.312122  366582 main.go:143] libmachine: domain ha-190114-m04 has defined IP address 192.168.39.211 and MAC address 52:54:00:98:1e:e1 in network mk-ha-190114
	I1027 22:09:04.312272  366582 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/ha-190114-m04/id_rsa Username:docker}
	I1027 22:09:04.400767  366582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:09:04.423144  366582 status.go:176] ha-190114-m04 status: &{Name:ha-190114-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (83.62s)
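
For reference, the per-node fields printed in the stdout block above (and dumped as a Status struct in the stderr log) can also be read programmatically via the `status --output json` form used at ha_test.go:328. The sketch below is hedged: the field names (Name, Host, Kubelet, APIServer, Kubeconfig, Worker) come from the struct shown in the log, but the exact JSON shape for a multi-node profile (single object vs. list) is an assumption, so both are tried.

// Hedged sketch: decode `minikube ... status --output json` for ha-190114.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type nodeStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// A non-zero exit (like the status 7 above, when a node is stopped)
	// can still come with usable stdout, so the error is ignored here.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-190114",
		"status", "--output", "json").Output()

	var nodes []nodeStatus
	if err := json.Unmarshal(out, &nodes); err != nil {
		var single nodeStatus
		if json.Unmarshal(out, &single) == nil {
			nodes = []nodeStatus{single}
		}
	}
	for _, n := range nodes {
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s worker=%v\n",
			n.Name, n.Host, n.Kubelet, n.APIServer, n.Worker)
	}
}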

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (43.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 node start m02 --alsologtostderr -v 5
E1027 22:09:35.217680  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 node start m02 --alsologtostderr -v 5: (42.524770844s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.008817336s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (388.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 stop --alsologtostderr -v 5
E1027 22:11:51.354539  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:11:58.686328  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:12:19.060073  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:13:21.755936  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 stop --alsologtostderr -v 5: (4m9.01066779s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 start --wait true --alsologtostderr -v 5: (2m19.436466055s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (388.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 node delete m03 --alsologtostderr -v 5: (17.513702444s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.18s)
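
The Ready check at ha_test.go:521 relies on a Go text/template that kubectl evaluates over the decoded NodeList JSON, which is why the lowercase keys (.items, .status, .type) work. Below is a standalone sketch of how that exact template walks the data; the node list here is hand-rolled for illustration rather than fetched from a cluster.

// Evaluate the Ready-condition template from the test against a fake NodeList.
package main

import (
	"os"
	"text/template"
)

const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// kubectl feeds decoded JSON (maps and slices) to text/template,
	// so map keys are addressed directly with the lowercase field syntax.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	// Prints one " True" line per node, matching the test's expectation.
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, nodes)
}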

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (236.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 stop --alsologtostderr -v 5
E1027 22:16:51.354671  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:58.680491  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 stop --alsologtostderr -v 5: (3m55.934330009s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-190114 status --alsologtostderr -v 5: exit status 7 (70.636309ms)

                                                
                                                
-- stdout --
	ha-190114
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-190114-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-190114-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:20:32.854810  370285 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:20:32.855118  370285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:20:32.855129  370285 out.go:374] Setting ErrFile to fd 2...
	I1027 22:20:32.855134  370285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:20:32.855365  370285 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:20:32.855625  370285 out.go:368] Setting JSON to false
	I1027 22:20:32.855659  370285 mustload.go:66] Loading cluster: ha-190114
	I1027 22:20:32.855710  370285 notify.go:221] Checking for updates...
	I1027 22:20:32.856138  370285 config.go:182] Loaded profile config "ha-190114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:20:32.856158  370285 status.go:174] checking status of ha-190114 ...
	I1027 22:20:32.858148  370285 status.go:371] ha-190114 host status = "Stopped" (err=<nil>)
	I1027 22:20:32.858166  370285 status.go:384] host is not running, skipping remaining checks
	I1027 22:20:32.858172  370285 status.go:176] ha-190114 status: &{Name:ha-190114 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:20:32.858190  370285 status.go:174] checking status of ha-190114-m02 ...
	I1027 22:20:32.859496  370285 status.go:371] ha-190114-m02 host status = "Stopped" (err=<nil>)
	I1027 22:20:32.859513  370285 status.go:384] host is not running, skipping remaining checks
	I1027 22:20:32.859518  370285 status.go:176] ha-190114-m02 status: &{Name:ha-190114-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:20:32.859533  370285 status.go:174] checking status of ha-190114-m04 ...
	I1027 22:20:32.860680  370285 status.go:371] ha-190114-m04 host status = "Stopped" (err=<nil>)
	I1027 22:20:32.860695  370285 status.go:384] host is not running, skipping remaining checks
	I1027 22:20:32.860700  370285 status.go:176] ha-190114-m04 status: &{Name:ha-190114-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (236.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (109.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1027 22:21:51.354882  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:21:58.680390  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m48.453777809s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (109.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 node add --control-plane --alsologtostderr -v 5
E1027 22:23:14.424292  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-190114 node add --control-plane --alsologtostderr -v 5: (1m17.592303088s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-190114 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.71s)

                                                
                                    
x
+
TestJSONOutput/start/Command (88.96s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-182180 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-182180 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.955205362s)
--- PASS: TestJSONOutput/start/Command (88.96s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-182180 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-182180 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.05s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-182180 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-182180 --output=json --user=testUser: (7.051203574s)
--- PASS: TestJSONOutput/stop/Command (7.05s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-802926 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-802926 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.722703ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d4ce6328-7e4b-4e0c-8a62-4be347b140e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-802926] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf14017c-a892-4314-860a-8f9d2fc0c18a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21790"}}
	{"specversion":"1.0","id":"a09fece9-aaac-4f97-a677-b9436d70915a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"667b136c-8892-4c93-a43e-c034417de30c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig"}}
	{"specversion":"1.0","id":"74301184-0743-4dfe-985d-1b1f137507ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube"}}
	{"specversion":"1.0","id":"c7648dc3-3cd7-483f-be6f-1ccbf02c5cce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"df1b09d5-0b1b-4123-bbb0-91a297f82457","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ae41662e-91c9-4f6d-9e20-4c3c0f2675ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-802926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-802926
--- PASS: TestErrorJSONOutput (0.25s)
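
The stdout block above is a stream of line-delimited CloudEvents, one JSON object per step/info/error message emitted by `--output=json`. The sketch below decodes such a stream; the `event` struct covers only the "type" and "data" fields visible in the log and is not an official minikube client type.

// Decode minikube's line-delimited JSON events from stdin,
// e.g.: out/minikube-linux-amd64 start ... --output=json | thisprog
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// io.k8s.sigs.minikube.error events carry exitcode/name/message,
		// as in the DRV_UNSUPPORTED_OS event shown above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}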

                                                
                                    
x
+
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
x
+
TestMinikubeProfile (86.45s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-096382 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-096382 --driver=kvm2  --container-runtime=crio: (41.066640334s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-100131 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-100131 --driver=kvm2  --container-runtime=crio: (42.533180211s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-096382
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-100131
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-100131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-100131
helpers_test.go:175: Cleaning up "first-096382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-096382
--- PASS: TestMinikubeProfile (86.45s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (24.2s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-368309 --memory=3072 --mount-string /tmp/TestMountStartserial3971442213/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1027 22:26:51.354396  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:26:58.687191  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-368309 --memory=3072 --mount-string /tmp/TestMountStartserial3971442213/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.201032228s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.20s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-368309 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-368309 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
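
The mount verification above checks `findmnt --json /minikube-host` inside the guest. As a hedged sketch, the JSON can be parsed as below; the top-level "filesystems" array and the target/source/fstype/options fields are the usual util-linux findmnt layout, assumed here rather than taken from the test output.

// Run findmnt over minikube ssh and report the 9p mount details.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		Fstype  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Equivalent to: out/minikube-linux-amd64 -p mount-start-1-368309 ssh -- findmnt --json /minikube-host
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-368309",
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		fmt.Println("ssh/findmnt failed:", err)
		return
	}
	var fm findmntOutput
	if err := json.Unmarshal(out, &fm); err != nil {
		fmt.Println("unexpected findmnt output:", err)
		return
	}
	for _, fs := range fm.Filesystems {
		fmt.Printf("%s on %s type %s (%s)\n", fs.Source, fs.Target, fs.Fstype, fs.Options)
	}
}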

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (24.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-390393 --memory=3072 --mount-string /tmp/TestMountStartserial3971442213/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-390393 --memory=3072 --mount-string /tmp/TestMountStartserial3971442213/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.329148233s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.33s)

TestMountStart/serial/VerifyMountSecond (0.32s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-390393 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-390393 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

TestMountStart/serial/DeleteFirst (0.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-368309 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-390393 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-390393 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

TestMountStart/serial/Stop (1.38s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-390393
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-390393: (1.37733746s)
--- PASS: TestMountStart/serial/Stop (1.38s)

TestMountStart/serial/RestartStopped (21.44s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-390393
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-390393: (20.44091636s)
--- PASS: TestMountStart/serial/RestartStopped (21.44s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-390393 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-390393 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (107.12s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-451958 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-451958 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.753769845s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.12s)

TestMultiNode/serial/DeployApp2Nodes (4.44s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-451958 -- rollout status deployment/busybox: (2.674347973s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec busybox-7b57f96db7-59zzb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec busybox-7b57f96db7-9wpkn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec busybox-7b57f96db7-59zzb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec busybox-7b57f96db7-9wpkn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec busybox-7b57f96db7-59zzb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec busybox-7b57f96db7-9wpkn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.44s)
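
    For reference, the DNS checks above boil down to applying the busybox manifest, waiting for the rollout, and running nslookup inside each pod; the commands are the ones logged for this run, and <busybox-pod> is a placeholder for a pod name returned by the get pods step:
      out/minikube-linux-amd64 kubectl -p multinode-451958 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
      out/minikube-linux-amd64 kubectl -p multinode-451958 -- rollout status deployment/busybox
      out/minikube-linux-amd64 kubectl -p multinode-451958 -- get pods -o jsonpath='{.items[*].metadata.name}'
      out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local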

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec busybox-7b57f96db7-59zzb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec busybox-7b57f96db7-59zzb -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec busybox-7b57f96db7-9wpkn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451958 -- exec busybox-7b57f96db7-9wpkn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

TestMultiNode/serial/AddNode (43.88s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-451958 -v=5 --alsologtostderr
E1027 22:30:01.757960  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-451958 -v=5 --alsologtostderr: (43.391916194s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.88s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-451958 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.5s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.50s)

TestMultiNode/serial/CopyFile (6.37s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp testdata/cp-test.txt multinode-451958:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp multinode-451958:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile407735647/001/cp-test_multinode-451958.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp multinode-451958:/home/docker/cp-test.txt multinode-451958-m02:/home/docker/cp-test_multinode-451958_multinode-451958-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m02 "sudo cat /home/docker/cp-test_multinode-451958_multinode-451958-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp multinode-451958:/home/docker/cp-test.txt multinode-451958-m03:/home/docker/cp-test_multinode-451958_multinode-451958-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m03 "sudo cat /home/docker/cp-test_multinode-451958_multinode-451958-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp testdata/cp-test.txt multinode-451958-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp multinode-451958-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile407735647/001/cp-test_multinode-451958-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp multinode-451958-m02:/home/docker/cp-test.txt multinode-451958:/home/docker/cp-test_multinode-451958-m02_multinode-451958.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958 "sudo cat /home/docker/cp-test_multinode-451958-m02_multinode-451958.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp multinode-451958-m02:/home/docker/cp-test.txt multinode-451958-m03:/home/docker/cp-test_multinode-451958-m02_multinode-451958-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m03 "sudo cat /home/docker/cp-test_multinode-451958-m02_multinode-451958-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp testdata/cp-test.txt multinode-451958-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp multinode-451958-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile407735647/001/cp-test_multinode-451958-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp multinode-451958-m03:/home/docker/cp-test.txt multinode-451958:/home/docker/cp-test_multinode-451958-m03_multinode-451958.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958 "sudo cat /home/docker/cp-test_multinode-451958-m03_multinode-451958.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 cp multinode-451958-m03:/home/docker/cp-test.txt multinode-451958-m02:/home/docker/cp-test_multinode-451958-m03_multinode-451958-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m02 "sudo cat /home/docker/cp-test_multinode-451958-m03_multinode-451958-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.37s)
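
    A minimal manual version of the copy checks above (profile, node and file names are the ones logged in this run):
      # push a local file to a node, then read it back over ssh
      out/minikube-linux-amd64 -p multinode-451958 cp testdata/cp-test.txt multinode-451958-m02:/home/docker/cp-test.txt
      out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958-m02 "sudo cat /home/docker/cp-test.txt"
      # copy node-to-node and verify on the destination node
      out/minikube-linux-amd64 -p multinode-451958 cp multinode-451958-m02:/home/docker/cp-test.txt multinode-451958:/home/docker/cp-test_multinode-451958-m02_multinode-451958.txt
      out/minikube-linux-amd64 -p multinode-451958 ssh -n multinode-451958 "sudo cat /home/docker/cp-test_multinode-451958-m02_multinode-451958.txt"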

TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-451958 node stop m03: (1.764040249s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-451958 status: exit status 7 (344.076825ms)

                                                
                                                
-- stdout --
	multinode-451958
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-451958-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-451958-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-451958 status --alsologtostderr: exit status 7 (353.098086ms)

                                                
                                                
-- stdout --
	multinode-451958
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-451958-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-451958-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:30:49.429850  376055 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:30:49.430144  376055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:30:49.430154  376055 out.go:374] Setting ErrFile to fd 2...
	I1027 22:30:49.430158  376055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:30:49.430381  376055 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:30:49.430546  376055 out.go:368] Setting JSON to false
	I1027 22:30:49.430593  376055 mustload.go:66] Loading cluster: multinode-451958
	I1027 22:30:49.430736  376055 notify.go:221] Checking for updates...
	I1027 22:30:49.431081  376055 config.go:182] Loaded profile config "multinode-451958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:30:49.431102  376055 status.go:174] checking status of multinode-451958 ...
	I1027 22:30:49.433321  376055 status.go:371] multinode-451958 host status = "Running" (err=<nil>)
	I1027 22:30:49.433344  376055 host.go:66] Checking if "multinode-451958" exists ...
	I1027 22:30:49.435837  376055 main.go:143] libmachine: domain multinode-451958 has defined MAC address 52:54:00:40:28:93 in network mk-multinode-451958
	I1027 22:30:49.436262  376055 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:28:93", ip: ""} in network mk-multinode-451958: {Iface:virbr1 ExpiryTime:2025-10-27 23:28:20 +0000 UTC Type:0 Mac:52:54:00:40:28:93 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-451958 Clientid:01:52:54:00:40:28:93}
	I1027 22:30:49.436290  376055 main.go:143] libmachine: domain multinode-451958 has defined IP address 192.168.39.167 and MAC address 52:54:00:40:28:93 in network mk-multinode-451958
	I1027 22:30:49.436410  376055 host.go:66] Checking if "multinode-451958" exists ...
	I1027 22:30:49.436615  376055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:30:49.439453  376055 main.go:143] libmachine: domain multinode-451958 has defined MAC address 52:54:00:40:28:93 in network mk-multinode-451958
	I1027 22:30:49.439819  376055 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:28:93", ip: ""} in network mk-multinode-451958: {Iface:virbr1 ExpiryTime:2025-10-27 23:28:20 +0000 UTC Type:0 Mac:52:54:00:40:28:93 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-451958 Clientid:01:52:54:00:40:28:93}
	I1027 22:30:49.439845  376055 main.go:143] libmachine: domain multinode-451958 has defined IP address 192.168.39.167 and MAC address 52:54:00:40:28:93 in network mk-multinode-451958
	I1027 22:30:49.440020  376055 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/multinode-451958/id_rsa Username:docker}
	I1027 22:30:49.523945  376055 ssh_runner.go:195] Run: systemctl --version
	I1027 22:30:49.530973  376055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:30:49.550201  376055 kubeconfig.go:125] found "multinode-451958" server: "https://192.168.39.167:8443"
	I1027 22:30:49.550247  376055 api_server.go:166] Checking apiserver status ...
	I1027 22:30:49.550293  376055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:30:49.572450  376055 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1350/cgroup
	W1027 22:30:49.587450  376055 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1350/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:30:49.587535  376055 ssh_runner.go:195] Run: ls
	I1027 22:30:49.593210  376055 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1027 22:30:49.599787  376055 api_server.go:279] https://192.168.39.167:8443/healthz returned 200:
	ok
	I1027 22:30:49.599834  376055 status.go:463] multinode-451958 apiserver status = Running (err=<nil>)
	I1027 22:30:49.599848  376055 status.go:176] multinode-451958 status: &{Name:multinode-451958 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:30:49.599877  376055 status.go:174] checking status of multinode-451958-m02 ...
	I1027 22:30:49.601667  376055 status.go:371] multinode-451958-m02 host status = "Running" (err=<nil>)
	I1027 22:30:49.601695  376055 host.go:66] Checking if "multinode-451958-m02" exists ...
	I1027 22:30:49.604576  376055 main.go:143] libmachine: domain multinode-451958-m02 has defined MAC address 52:54:00:2b:8e:aa in network mk-multinode-451958
	I1027 22:30:49.605252  376055 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:8e:aa", ip: ""} in network mk-multinode-451958: {Iface:virbr1 ExpiryTime:2025-10-27 23:29:20 +0000 UTC Type:0 Mac:52:54:00:2b:8e:aa Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-451958-m02 Clientid:01:52:54:00:2b:8e:aa}
	I1027 22:30:49.605289  376055 main.go:143] libmachine: domain multinode-451958-m02 has defined IP address 192.168.39.247 and MAC address 52:54:00:2b:8e:aa in network mk-multinode-451958
	I1027 22:30:49.605491  376055 host.go:66] Checking if "multinode-451958-m02" exists ...
	I1027 22:30:49.605803  376055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:30:49.608209  376055 main.go:143] libmachine: domain multinode-451958-m02 has defined MAC address 52:54:00:2b:8e:aa in network mk-multinode-451958
	I1027 22:30:49.608701  376055 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:8e:aa", ip: ""} in network mk-multinode-451958: {Iface:virbr1 ExpiryTime:2025-10-27 23:29:20 +0000 UTC Type:0 Mac:52:54:00:2b:8e:aa Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-451958-m02 Clientid:01:52:54:00:2b:8e:aa}
	I1027 22:30:49.608730  376055 main.go:143] libmachine: domain multinode-451958-m02 has defined IP address 192.168.39.247 and MAC address 52:54:00:2b:8e:aa in network mk-multinode-451958
	I1027 22:30:49.608936  376055 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21790-352679/.minikube/machines/multinode-451958-m02/id_rsa Username:docker}
	I1027 22:30:49.695113  376055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:30:49.712813  376055 status.go:176] multinode-451958-m02 status: &{Name:multinode-451958-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:30:49.712870  376055 status.go:174] checking status of multinode-451958-m03 ...
	I1027 22:30:49.714935  376055 status.go:371] multinode-451958-m03 host status = "Stopped" (err=<nil>)
	I1027 22:30:49.714963  376055 status.go:384] host is not running, skipping remaining checks
	I1027 22:30:49.714973  376055 status.go:176] multinode-451958-m03 status: &{Name:multinode-451958-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
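
    To reproduce the single-node stop check above by hand (profile and node name taken from this run; status exits 7 here because m03 is reported Stopped):
      out/minikube-linux-amd64 -p multinode-451958 node stop m03
      out/minikube-linux-amd64 -p multinode-451958 status --alsologtostderr
      out/minikube-linux-amd64 -p multinode-451958 node start m03 -v=5 --alsologtostderr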

TestMultiNode/serial/StartAfterStop (46.14s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-451958 node start m03 -v=5 --alsologtostderr: (45.612946539s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (46.14s)

TestMultiNode/serial/RestartKeepsNodes (299.79s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-451958
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-451958
E1027 22:31:51.354755  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:31:58.683977  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-451958: (2m48.390044917s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-451958 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-451958 --wait=true -v=5 --alsologtostderr: (2m11.259456065s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-451958
--- PASS: TestMultiNode/serial/RestartKeepsNodes (299.79s)
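
    The restart scenario above amounts to stopping the whole profile, starting it again with --wait=true, and checking that the node list is unchanged (commands copied from this run):
      out/minikube-linux-amd64 node list -p multinode-451958
      out/minikube-linux-amd64 stop -p multinode-451958
      out/minikube-linux-amd64 start -p multinode-451958 --wait=true -v=5 --alsologtostderr
      out/minikube-linux-amd64 node list -p multinode-451958   # expected to match the list taken before the stop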

TestMultiNode/serial/DeleteNode (2.96s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-451958 node delete m03: (2.458088903s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.96s)

TestMultiNode/serial/StopMultiNode (159.76s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 stop
E1027 22:36:51.354507  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:36:58.680475  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-451958 stop: (2m39.621296002s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-451958 status: exit status 7 (70.746364ms)

                                                
                                                
-- stdout --
	multinode-451958
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-451958-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-451958 status --alsologtostderr: exit status 7 (68.151233ms)

                                                
                                                
-- stdout --
	multinode-451958
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-451958-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:39:18.355520  378425 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:39:18.355791  378425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:18.355802  378425 out.go:374] Setting ErrFile to fd 2...
	I1027 22:39:18.355808  378425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:18.356055  378425 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:39:18.356248  378425 out.go:368] Setting JSON to false
	I1027 22:39:18.356284  378425 mustload.go:66] Loading cluster: multinode-451958
	I1027 22:39:18.356391  378425 notify.go:221] Checking for updates...
	I1027 22:39:18.356719  378425 config.go:182] Loaded profile config "multinode-451958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:18.356737  378425 status.go:174] checking status of multinode-451958 ...
	I1027 22:39:18.359110  378425 status.go:371] multinode-451958 host status = "Stopped" (err=<nil>)
	I1027 22:39:18.359130  378425 status.go:384] host is not running, skipping remaining checks
	I1027 22:39:18.359138  378425 status.go:176] multinode-451958 status: &{Name:multinode-451958 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:39:18.359162  378425 status.go:174] checking status of multinode-451958-m02 ...
	I1027 22:39:18.360645  378425 status.go:371] multinode-451958-m02 host status = "Stopped" (err=<nil>)
	I1027 22:39:18.360663  378425 status.go:384] host is not running, skipping remaining checks
	I1027 22:39:18.360671  378425 status.go:176] multinode-451958-m02 status: &{Name:multinode-451958-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (159.76s)

TestMultiNode/serial/RestartMultiNode (89.86s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-451958 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1027 22:39:54.425883  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-451958 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m29.352228235s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451958 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.86s)

TestMultiNode/serial/ValidateNameConflict (41.64s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-451958
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-451958-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-451958-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (88.172866ms)

                                                
                                                
-- stdout --
	* [multinode-451958-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-451958-m02' is duplicated with machine name 'multinode-451958-m02' in profile 'multinode-451958'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-451958-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-451958-m03 --driver=kvm2  --container-runtime=crio: (40.406134383s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-451958
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-451958: exit status 80 (222.123438ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-451958 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-451958-m03 already exists in multinode-451958-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-451958-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.64s)
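
    The conflict check above shows that a new profile may not reuse a machine name that already belongs to another profile's nodes; a sketch using the names from this run (the second command only succeeds here because node m03 had been deleted from multinode-451958 earlier):
      # rejected with MK_USAGE (exit status 14): multinode-451958-m02 is a machine name inside profile multinode-451958
      out/minikube-linux-amd64 start -p multinode-451958-m02 --driver=kvm2 --container-runtime=crio
      # accepted in this run, then cleaned up with: out/minikube-linux-amd64 delete -p multinode-451958-m03
      out/minikube-linux-amd64 start -p multinode-451958-m03 --driver=kvm2 --container-runtime=crio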

TestScheduledStopUnix (113.26s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-173832 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-173832 --memory=3072 --driver=kvm2  --container-runtime=crio: (41.485092333s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-173832 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-173832 -n scheduled-stop-173832
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-173832 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1027 22:44:51.188623  356621 retry.go:31] will retry after 147.533µs: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.189822  356621 retry.go:31] will retry after 98.189µs: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.191070  356621 retry.go:31] will retry after 218.034µs: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.192259  356621 retry.go:31] will retry after 178.77µs: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.193424  356621 retry.go:31] will retry after 485.695µs: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.194569  356621 retry.go:31] will retry after 830.733µs: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.195725  356621 retry.go:31] will retry after 1.452749ms: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.197950  356621 retry.go:31] will retry after 1.150751ms: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.200241  356621 retry.go:31] will retry after 2.866309ms: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.203499  356621 retry.go:31] will retry after 2.631535ms: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.206797  356621 retry.go:31] will retry after 4.109121ms: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.212047  356621 retry.go:31] will retry after 7.556945ms: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.220318  356621 retry.go:31] will retry after 19.114484ms: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.239646  356621 retry.go:31] will retry after 14.988232ms: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.254991  356621 retry.go:31] will retry after 41.829742ms: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
I1027 22:44:51.297638  356621 retry.go:31] will retry after 45.49672ms: open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/scheduled-stop-173832/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-173832 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-173832 -n scheduled-stop-173832
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-173832
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-173832 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-173832
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-173832: exit status 7 (69.094477ms)

                                                
                                                
-- stdout --
	scheduled-stop-173832
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-173832 -n scheduled-stop-173832
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-173832 -n scheduled-stop-173832: exit status 7 (65.845041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-173832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-173832
--- PASS: TestScheduledStopUnix (113.26s)
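
    The scheduled-stop flow above can be driven manually with the same flags (profile name is the one this run generated):
      out/minikube-linux-amd64 stop -p scheduled-stop-173832 --schedule 5m        # arm a stop five minutes out
      out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-173832
      out/minikube-linux-amd64 stop -p scheduled-stop-173832 --cancel-scheduled   # cancel the pending stop
      out/minikube-linux-amd64 stop -p scheduled-stop-173832 --schedule 15s       # or let a short schedule fire; status then exits 7 with everything Stopped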

TestRunningBinaryUpgrade (147.91s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3414486762 start -p running-upgrade-977671 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1027 22:46:41.759484  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:46:51.353944  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:46:58.680053  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3414486762 start -p running-upgrade-977671 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m48.018564738s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-977671 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-977671 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.350447003s)
helpers_test.go:175: Cleaning up "running-upgrade-977671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-977671
--- PASS: TestRunningBinaryUpgrade (147.91s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-830800 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-830800 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (104.781305ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-830800] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
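
    As the MK_USAGE error above states, --kubernetes-version cannot be combined with --no-kubernetes; a sketch of the rejected and accepted forms, using the names from this run:
      # rejected with exit status 14
      out/minikube-linux-amd64 start -p NoKubernetes-830800 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
      # clear any globally configured version (as the error message suggests), then start without Kubernetes
      out/minikube-linux-amd64 config unset kubernetes-version
      out/minikube-linux-amd64 start -p NoKubernetes-830800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio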

TestPause/serial/Start (106.87s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-135059 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-135059 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m46.867675953s)
--- PASS: TestPause/serial/Start (106.87s)

TestNoKubernetes/serial/StartWithK8s (87.12s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-830800 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-830800 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m26.800339115s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-830800 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (87.12s)

TestNoKubernetes/serial/StartWithStopK8s (32.02s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-830800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-830800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (30.898664653s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-830800 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-830800 status -o json: exit status 2 (224.752616ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-830800","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-830800
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (32.02s)

TestNoKubernetes/serial/Start (36.82s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-830800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-830800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (36.819317572s)
--- PASS: TestNoKubernetes/serial/Start (36.82s)

TestNetworkPlugins/group/false (4.92s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-561731 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-561731 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (143.630149ms)

                                                
                                                
-- stdout --
	* [false-561731] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:48:18.541198  383504 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:48:18.541307  383504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:48:18.541314  383504 out.go:374] Setting ErrFile to fd 2...
	I1027 22:48:18.541320  383504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:48:18.541534  383504 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-352679/.minikube/bin
	I1027 22:48:18.542092  383504 out.go:368] Setting JSON to false
	I1027 22:48:18.543508  383504 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9046,"bootTime":1761596253,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:48:18.543652  383504 start.go:143] virtualization: kvm guest
	I1027 22:48:18.545815  383504 out.go:179] * [false-561731] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:48:18.547439  383504 notify.go:221] Checking for updates...
	I1027 22:48:18.547458  383504 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:48:18.548818  383504 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:48:18.550167  383504 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-352679/kubeconfig
	I1027 22:48:18.551591  383504 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-352679/.minikube
	I1027 22:48:18.552942  383504 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:48:18.554198  383504 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:48:18.555872  383504 config.go:182] Loaded profile config "NoKubernetes-830800": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1027 22:48:18.556067  383504 config.go:182] Loaded profile config "pause-135059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:48:18.556180  383504 config.go:182] Loaded profile config "running-upgrade-977671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1027 22:48:18.556308  383504 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:48:18.599421  383504 out.go:179] * Using the kvm2 driver based on user configuration
	I1027 22:48:18.600585  383504 start.go:307] selected driver: kvm2
	I1027 22:48:18.600628  383504 start.go:928] validating driver "kvm2" against <nil>
	I1027 22:48:18.600663  383504 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:48:18.603054  383504 out.go:203] 
	W1027 22:48:18.604194  383504 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1027 22:48:18.605718  383504 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-561731 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-561731" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:48:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.114:8443
  name: pause-135059
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:48:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.83:8443
  name: running-upgrade-977671
contexts:
- context:
    cluster: pause-135059
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:48:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-135059
  name: pause-135059
- context:
    cluster: running-upgrade-977671
    user: running-upgrade-977671
  name: running-upgrade-977671
current-context: running-upgrade-977671
kind: Config
users:
- name: pause-135059
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/client.crt
    client-key: /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/client.key
- name: running-upgrade-977671
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/running-upgrade-977671/client.crt
    client-key: /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/running-upgrade-977671/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-561731

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-561731"

                                                
                                                
----------------------- debugLogs end: false-561731 [took: 4.582850647s] --------------------------------
helpers_test.go:175: Cleaning up "false-561731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-561731
--- PASS: TestNetworkPlugins/group/false (4.92s)
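
The MK_USAGE exit captured in the stderr block above is minikube's pre-flight validation: the crio runtime always needs a CNI plugin, so a profile that explicitly disables CNI is rejected before any VM is created, which is why every debugLogs probe afterwards reports a missing "false-561731" profile/context. A rough manual reproduction, assuming the "false" variant passes --cni=false together with --container-runtime=crio, would be:

	# fails fast with MK_USAGE: the "crio" container runtime requires CNI (no profile is created)
	out/minikube-linux-amd64 start -p false-561731 --cni=false --driver=kvm2 --container-runtime=crio
	# confirms the aborted start left nothing behind
	out/minikube-linux-amd64 profile list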

                                                
                                    
x
+
TestISOImage/Setup (33.31s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p guest-734990 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p guest-734990 --no-kubernetes --driver=kvm2  --container-runtime=crio: (33.31088938s)
--- PASS: TestISOImage/Setup (33.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-830800 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-830800 "sudo systemctl is-active --quiet service kubelet": exit status 1 (174.275326ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
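
VerifyK8sNotRunning relies only on the exit status of systemd's is-active check over SSH; the ssh wrapper surfaces the non-zero code as "Process exited with status 4". Run by hand (profile name taken from the log) it looks like:

	# exit 0 would mean the kubelet unit is active; a non-zero code (4 here, typically
	# "inactive" or "no such unit") is exactly what a no-Kubernetes profile should return
	out/minikube-linux-amd64 ssh -p NoKubernetes-830800 "sudo systemctl is-active --quiet service kubelet"
	echo "exit=$?"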

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.50s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-830800
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-830800: (1.393333363s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (80.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-830800 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-830800 --driver=kvm2  --container-runtime=crio: (1m20.339900217s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (80.34s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)
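
Each Binaries subtest above is the same one-line probe with a different binary name; a compact way to repeat the whole sweep against the guest profile by hand is a simple loop (binary list copied from the subtests above):

	for bin in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
	  # "which" exits non-zero if the binary is missing from the ISO's PATH
	  out/minikube-linux-amd64 -p guest-734990 ssh "which $bin" || echo "missing: $bin"
	done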

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (154.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-195196 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-195196 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (2m34.218719674s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (154.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-830800 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-830800 "sudo systemctl is-active --quiet service kubelet": exit status 1 (170.070896ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (147.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-734404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-734404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (2m27.564770983s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (147.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-195196 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [549143e7-7eb4-4579-ac8f-f4340b9b23d9] Pending
helpers_test.go:352: "busybox" [549143e7-7eb4-4579-ac8f-f4340b9b23d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [549143e7-7eb4-4579-ac8f-f4340b9b23d9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004978118s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-195196 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)
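
DeployApp boils down to three steps: create the busybox pod from testdata, wait for its label selector to report Ready, then confirm exec works inside it. The harness does the waiting with its own polling helper; an equivalent manual sequence using kubectl wait instead is roughly:

	kubectl --context old-k8s-version-195196 create -f testdata/busybox.yaml
	# stand-in for the harness's poll on pods matching integration-test=busybox
	kubectl --context old-k8s-version-195196 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-195196 exec busybox -- /bin/sh -c "ulimit -n"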

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-195196 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-195196 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.28254673s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-195196 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (81.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-195196 --alsologtostderr -v=3
E1027 22:51:51.354433  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:51:58.680630  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-195196 --alsologtostderr -v=3: (1m21.939856159s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (81.94s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-734404 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6c83e189-4143-4223-8d87-2e1aff49cf0f] Pending
helpers_test.go:352: "busybox" [6c83e189-4143-4223-8d87-2e1aff49cf0f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6c83e189-4143-4223-8d87-2e1aff49cf0f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005015364s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-734404 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-734404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-734404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.092959733s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-734404 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (89.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-734404 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-734404 --alsologtostderr -v=3: (1m29.868214571s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (89.87s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (60.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-174790 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-174790 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m0.365876449s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-195196 -n old-k8s-version-195196
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-195196 -n old-k8s-version-195196: exit status 7 (66.394288ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-195196 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
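
EnableAddonAfterStop checks that enabling an addon still succeeds while the cluster is stopped; the enable appears to be recorded in the profile and applied on the next start, which is why the exit-7 Stopped status above is tolerated. By hand:

	# exit status 7 just reports the Stopped host state; the test treats it as acceptable
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-195196 -n old-k8s-version-195196
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-195196 --images=MetricsScraper=registry.k8s.io/echoserver:1.4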

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (63.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-195196 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-195196 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m3.503985393s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-195196 -n old-k8s-version-195196
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (63.85s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-174790 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2879129d-f5e7-4c08-8197-9e6d5ff177f7] Pending
helpers_test.go:352: "busybox" [2879129d-f5e7-4c08-8197-9e6d5ff177f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2879129d-f5e7-4c08-8197-9e6d5ff177f7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.006374572s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-174790 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vt88c" [cc39432b-2f54-44d9-9b6b-247d95ce6dcb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vt88c" [cc39432b-2f54-44d9-9b6b-247d95ce6dcb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.004788839s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-734404 -n no-preload-734404
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-734404 -n no-preload-734404: exit status 7 (75.025414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-734404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (61.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-734404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-734404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m1.202437087s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-734404 -n no-preload-734404
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (61.55s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-174790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-174790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.314174216s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-174790 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (86.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-174790 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-174790 --alsologtostderr -v=3: (1m26.634967736s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (86.64s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vt88c" [cc39432b-2f54-44d9-9b6b-247d95ce6dcb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00467257s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-195196 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-195196 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-195196 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-195196 -n old-k8s-version-195196
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-195196 -n old-k8s-version-195196: exit status 2 (246.926919ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-195196 -n old-k8s-version-195196
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-195196 -n old-k8s-version-195196: exit status 2 (234.384165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-195196 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-195196 -n old-k8s-version-195196
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-195196 -n old-k8s-version-195196
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.99s)
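
The Pause sequence pauses the cluster, reads the result back through status templates (exit status 2 is expected while components are paused), then unpauses and checks again. Repeated by hand:

	out/minikube-linux-amd64 pause -p old-k8s-version-195196 --alsologtostderr -v=1
	# while paused, APIServer reports "Paused" and Kubelet "Stopped"; status exits 2, which the test accepts
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-195196 -n old-k8s-version-195196
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-195196 -n old-k8s-version-195196
	out/minikube-linux-amd64 unpause -p old-k8s-version-195196 --alsologtostderr -v=1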

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5qwdq" [95aeb3fe-b543-4251-8d08-d89b16d64d6c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5qwdq" [95aeb3fe-b543-4251-8d08-d89b16d64d6c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.026529727s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5qwdq" [95aeb3fe-b543-4251-8d08-d89b16d64d6c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005214189s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-734404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-320488 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-320488 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m21.887485989s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.89s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-734404 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-734404 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-734404 -n no-preload-734404
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-734404 -n no-preload-734404: exit status 2 (244.767885ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-734404 -n no-preload-734404
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-734404 -n no-preload-734404: exit status 2 (244.002204ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-734404 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-734404 -n no-preload-734404
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-734404 -n no-preload-734404
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.89s)
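
The pause check above can be replayed by hand with the same commands the test runs (the profile name is specific to this run); while the profile is paused, "status" exiting with code 2 and reporting the kubelet as Stopped is the expected state, not a failure:

	out/minikube-linux-amd64 pause -p no-preload-734404 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-734404 -n no-preload-734404    # prints "Paused", exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-734404 -n no-preload-734404      # prints "Stopped", exit status 2
	out/minikube-linux-amd64 unpause -p no-preload-734404 --alsologtostderr -v=1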

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (102.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1945483874 start -p stopped-upgrade-096634 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1945483874 start -p stopped-upgrade-096634 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (59.508788885s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1945483874 -p stopped-upgrade-096634 stop
E1027 22:56:35.801578  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:56:35.808074  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:56:35.819859  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:56:35.842095  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:56:35.883779  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:56:35.965996  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:56:36.127413  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:56:36.449511  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:56:37.091133  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:56:38.373157  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1945483874 -p stopped-upgrade-096634 stop: (3.278061614s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-096634 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1027 22:56:40.935265  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:56:46.057255  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-096634 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.559143923s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-174790 -n embed-certs-174790
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-174790 -n embed-certs-174790: exit status 7 (66.282211ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-174790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)
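
This check boils down to: confirm the stopped profile reports Stopped (exit status 7 is tolerated here), then enable the dashboard addon against the stopped profile so the next start picks it up. A minimal manual replay with the same commands the test runs:

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-174790 -n embed-certs-174790    # prints "Stopped", exit status 7
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-174790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4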

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (75.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-174790 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 22:56:34.429508  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-174790 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m15.291645724s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-174790 -n embed-certs-174790
E1027 22:56:58.680574  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (75.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-320488 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8d5036dc-6dac-4d42-8d4e-d70cb87c30ee] Pending
helpers_test.go:352: "busybox" [8d5036dc-6dac-4d42-8d4e-d70cb87c30ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1027 22:56:51.354131  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [8d5036dc-6dac-4d42-8d4e-d70cb87c30ee] Running
E1027 22:56:56.298654  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00584802s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-320488 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fn9dc" [3e33b451-c604-4dad-a3cb-dac80d8f3b42] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.191206277s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-320488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-320488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.537219972s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-320488 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (85.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-320488 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-320488 --alsologtostderr -v=3: (1m25.022812303s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (85.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fn9dc" [3e33b451-c604-4dad-a3cb-dac80d8f3b42] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004050621s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-174790 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-174790 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-174790 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-174790 --alsologtostderr -v=1: (1.021989963s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-174790 -n embed-certs-174790
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-174790 -n embed-certs-174790: exit status 2 (279.031325ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-174790 -n embed-certs-174790
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-174790 -n embed-certs-174790: exit status 2 (265.721668ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-174790 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-174790 -n embed-certs-174790
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-174790 -n embed-certs-174790
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-845639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 22:57:16.780551  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-845639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (49.811565275s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.81s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-096634
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-096634: (1.278595923s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (103.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1027 22:57:32.627405  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:32.633852  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:32.645401  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:32.666941  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:32.708375  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:32.789940  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:32.951558  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:33.273333  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:33.915452  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:35.197490  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:37.759580  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:42.880934  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:53.122527  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:57:57.742729  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m43.258529389s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-845639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-845639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.126942955s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-845639 --alsologtostderr -v=3
E1027 22:58:13.604336  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-845639 --alsologtostderr -v=3: (10.474156724s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-845639 -n newest-cni-845639
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-845639 -n newest-cni-845639: exit status 7 (79.26364ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-845639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-845639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-845639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (37.54351474s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-845639 -n newest-cni-845639
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-320488 -n default-k8s-diff-port-320488
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-320488 -n default-k8s-diff-port-320488: exit status 7 (80.525414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-320488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-320488 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 22:58:54.566309  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-320488 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (57.861244149s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-320488 -n default-k8s-diff-port-320488
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-845639 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-845639 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-845639 -n newest-cni-845639
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-845639 -n newest-cni-845639: exit status 2 (269.467941ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-845639 -n newest-cni-845639
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-845639 -n newest-cni-845639: exit status 2 (275.417763ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-845639 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-845639 -n newest-cni-845639
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-845639 -n newest-cni-845639
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.98s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (94.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m34.252358839s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.25s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-561731 "pgrep -a kubelet"
I1027 22:59:04.153132  356621 config.go:182] Loaded profile config "auto-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-561731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2pngm" [e29571c7-dff3-4cb1-9c29-63fac17c569f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2pngm" [e29571c7-dff3-4cb1-9c29-63fac17c569f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006378419s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.39s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-561731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
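
The DNS, Localhost and HairPin probes above all run inside the netcat deployment created in NetCatPod; they can be repeated against the same context with the commands the test uses:

	kubectl --context auto-561731 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # localhost reachability
	kubectl --context auto-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # hairpin: the pod reaching its own service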

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-645kv" [acce223c-fb2e-454f-922a-cb6d8e933bb8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-645kv" [acce223c-fb2e-454f-922a-cb6d8e933bb8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.005643404s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m14.764578552s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.76s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-645kv" [acce223c-fb2e-454f-922a-cb6d8e933bb8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006122324s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-320488 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-320488 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-320488 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-320488 --alsologtostderr -v=1: (1.185292249s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-320488 -n default-k8s-diff-port-320488
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-320488 -n default-k8s-diff-port-320488: exit status 2 (294.481811ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-320488 -n default-k8s-diff-port-320488
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-320488 -n default-k8s-diff-port-320488: exit status 2 (268.148487ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-320488 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-320488 -n default-k8s-diff-port-320488
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-320488 -n default-k8s-diff-port-320488
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (81.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1027 23:00:16.488023  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.066700081s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.07s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-hf6kw" [d42ec122-5358-4fee-8d3e-5024f420d9dc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004647555s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-561731 "pgrep -a kubelet"
I1027 23:00:41.182983  356621 config.go:182] Loaded profile config "kindnet-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-561731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-96r2g" [e1f7ba4d-81cf-417b-925a-f12db9ba9787] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-96r2g" [e1f7ba4d-81cf-417b-925a-f12db9ba9787] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004137567s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-fss4v" [bca0401d-b390-4e15-ab93-55f5a8911091] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-fss4v" [bca0401d-b390-4e15-ab93-55f5a8911091] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008198775s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-561731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-561731 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I1027 23:00:52.901965  356621 config.go:182] Loaded profile config "calico-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-561731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ck7x2" [b40d2d84-87f8-4f49-a6d0-ff27430e71fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ck7x2" [b40d2d84-87f8-4f49-a6d0-ff27430e71fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005367349s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-561731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-561731 "pgrep -a kubelet"
I1027 23:01:07.212976  356621 config.go:182] Loaded profile config "custom-flannel-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-561731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gkql9" [92381f81-1afa-4ba0-bc01-fadeb2f131d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gkql9" [92381f81-1afa-4ba0-bc01-fadeb2f131d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005525149s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (86.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m26.417751677s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-561731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)
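The four custom-flannel connectivity subtests above all exercise the same netcat deployment from testdata/netcat-deployment.yaml: DNS resolves kubernetes.default, Localhost checks port 8080 on the pod's own loopback, and HairPin checks that the pod can reach itself back through its own service name. A minimal sketch of running the same probes by hand, assuming the context still exists and the deployment and service are both named netcat:

    CTX=custom-flannel-561731
    kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback
    kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin via the service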

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (84.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m24.416781262s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (73.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1027 23:01:35.802096  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:49.084045  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:49.090682  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:49.102300  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:49.123780  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:49.165312  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:49.247093  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:49.408559  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:49.730622  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:50.372519  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:51.354317  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/functional-880510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:51.654137  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:54.215539  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:58.680766  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:01:59.336998  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:02:03.507161  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/old-k8s-version-195196/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:02:09.578513  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:02:30.060020  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:02:32.627263  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-561731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m13.772286367s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.77s)
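The Start subtests in this group differ only in which CNI is requested on the minikube start command line (--cni=flannel, --cni=bridge, --enable-default-cni=true, or a custom manifest). A hedged sketch for checking which CNI configuration actually landed in the guest after such a start; /etc/cni/net.d is the conventional CNI config directory, and the exact file names found there depend on the plugin:

    minikube ssh -p bridge-561731 "ls /etc/cni/net.d && sudo cat /etc/cni/net.d/*"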

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-561731 "pgrep -a kubelet"
I1027 23:02:34.323376  356621 config.go:182] Loaded profile config "enable-default-cni-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-561731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-28x86" [5159450f-a652-4943-8ebc-ccca75437b25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-28x86" [5159450f-a652-4943-8ebc-ccca75437b25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.007234493s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-58z8n" [aeccaada-101a-46a4-81ac-035f69a362ad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004659901s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
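ControllerPod waits for the flannel DaemonSet pods (label app=flannel in the kube-flannel namespace) to reach Running. A sketch of the equivalent wait with plain kubectl, assuming the same context; the 120s timeout is arbitrary:

    kubectl --context flannel-561731 -n kube-flannel wait pod \
      --selector=app=flannel --for=condition=Ready --timeout=120s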

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-561731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-561731 "pgrep -a kubelet"
I1027 23:02:48.692384  356621 config.go:182] Loaded profile config "bridge-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-561731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6rkvx" [ede359c4-952c-4680-8dfd-5648a68a501d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6rkvx" [ede359c4-952c-4680-8dfd-5648a68a501d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004228417s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-561731 "pgrep -a kubelet"
I1027 23:02:51.587003  356621 config.go:182] Loaded profile config "flannel-561731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-561731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6s78j" [c60d6590-6328-499e-98cf-e087a793dac3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6s78j" [c60d6590-6328-499e-98cf-e087a793dac3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005192835s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-561731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1027 23:03:00.329908  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/no-preload-734404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-561731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-561731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)
E1027 23:03:21.761604  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/addons-865238/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:04.514592  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:04.521121  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:04.532655  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:04.554178  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:04.596061  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:04.677672  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:04.839382  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:05.161374  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:05.803039  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:07.085041  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:09.646945  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:14.768289  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:25.010065  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:32.944132  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:04:45.492201  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:05:26.453944  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/auto-561731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)
E1027 23:03:11.022060  356621 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/default-k8s-diff-port-320488/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-734990 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)
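Each PersistentMounts subtest verifies that the listed directory is backed by the persistent ext4 data volume rather than tmpfs, using df -t ext4 inside the guest. A sketch that loops over the same set of paths against the profile above; the path list simply mirrors the subtests:

    for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
      minikube ssh -p guest-734990 "df -t ext4 $d | grep $d" || echo "not ext4-backed: $d"
    done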

                                                
                                    

Test skip (40/342)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.34
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
126 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
261 TestStartStop/group/disable-driver-mounts 0.24
271 TestNetworkPlugins/group/kubenet 4.65
279 TestNetworkPlugins/group/cilium 4.7
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.34s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-865238 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-800119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-800119
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-561731 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-561731" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:47:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.114:8443
  name: pause-135059
contexts:
- context:
    cluster: pause-135059
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:47:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-135059
  name: pause-135059
current-context: ""
kind: Config
users:
- name: pause-135059
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/client.crt
    client-key: /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-561731

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-561731"

                                                
                                                
----------------------- debugLogs end: kubenet-561731 [took: 4.447592825s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-561731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-561731
--- SKIP: TestNetworkPlugins/group/kubenet (4.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-561731 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-561731" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:48:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.114:8443
  name: pause-135059
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-352679/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:48:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.83:8443
  name: running-upgrade-977671
contexts:
- context:
    cluster: pause-135059
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:48:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-135059
  name: pause-135059
- context:
    cluster: running-upgrade-977671
    user: running-upgrade-977671
  name: running-upgrade-977671
current-context: running-upgrade-977671
kind: Config
users:
- name: pause-135059
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/client.crt
    client-key: /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/pause-135059/client.key
- name: running-upgrade-977671
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/running-upgrade-977671/client.crt
    client-key: /home/jenkins/minikube-integration/21790-352679/.minikube/profiles/running-upgrade-977671/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-561731

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-561731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-561731"

                                                
                                                
----------------------- debugLogs end: cilium-561731 [took: 4.488372519s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-561731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-561731
--- SKIP: TestNetworkPlugins/group/cilium (4.70s)

                                                
                                    