Test Report: KVM_Linux_crio 21631

b128d3d4cdbb5b7aeeced7d5ab95296ac270db89:2025-10-01:41714

Failed tests (4/324)

Order  Failed test                                    Duration (s)
37     TestAddons/parallel/Ingress                    157.92
50     TestCertExpiration                             1077.99
244    TestPreload                                    163.91
261    TestPause/serial/SecondStartNoReconfiguration  76.29
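
For reference, the failing ingress check can be approximated by hand. The commands below are a minimal sketch assembled from the test steps recorded in the TestAddons/parallel/Ingress trace further down (profile name addons-289249, the testdata/ manifests, and the Host header all come from this run); the explicit --max-time flag is an added assumption so a hung request fails quickly instead of blocking the shell.

  # Wait for the ingress-nginx controller, deploy the test nginx pod/service,
  # then hit it through the ingress from inside the VM -- the step that
  # failed here with exit status 28 (curl's timeout code).
  kubectl --context addons-289249 wait --for=condition=ready \
    --namespace=ingress-nginx pod \
    --selector=app.kubernetes.io/component=controller --timeout=90s
  kubectl --context addons-289249 replace --force -f testdata/nginx-ingress-v1.yaml
  kubectl --context addons-289249 replace --force -f testdata/nginx-pod-svc.yaml
  out/minikube-linux-amd64 -p addons-289249 ssh \
    "curl -s --max-time 120 http://127.0.0.1/ -H 'Host: nginx.example.com'"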
TestAddons/parallel/Ingress (157.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-289249 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-289249 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-289249 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ffba61f2-ccbf-4f04-a767-abbb659d470d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ffba61f2-ccbf-4f04-a767-abbb659d470d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004741199s
I1001 17:51:44.949962   13469 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-289249 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.235469556s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-289249 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.98
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-289249 -n addons-289249
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-289249 logs -n 25: (1.317570292s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-807514                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-807514 │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │ 01 Oct 25 17:47 UTC │
	│ start   │ --download-only -p binary-mirror-971827 --alsologtostderr --binary-mirror http://127.0.0.1:42391 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-971827 │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │                     │
	│ delete  │ -p binary-mirror-971827                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-971827 │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │ 01 Oct 25 17:47 UTC │
	│ addons  │ disable dashboard -p addons-289249                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │                     │
	│ addons  │ enable dashboard -p addons-289249                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │                     │
	│ start   │ -p addons-289249 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ addons-289249 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ addons-289249 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ enable headlamp -p addons-289249 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ addons-289249 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ addons-289249 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ addons-289249 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-289249                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ addons-289249 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ addons-289249 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ ip      │ addons-289249 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ addons-289249 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ ssh     │ addons-289249 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │                     │
	│ addons  │ addons-289249 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:51 UTC │
	│ addons  │ addons-289249 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:51 UTC │ 01 Oct 25 17:52 UTC │
	│ ssh     │ addons-289249 ssh cat /opt/local-path-provisioner/pvc-f59bf8b6-7ab5-405f-97f9-b1c0ba9ac7a3_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:52 UTC │ 01 Oct 25 17:52 UTC │
	│ addons  │ addons-289249 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:52 UTC │ 01 Oct 25 17:52 UTC │
	│ addons  │ addons-289249 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:52 UTC │ 01 Oct 25 17:52 UTC │
	│ addons  │ addons-289249 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:52 UTC │ 01 Oct 25 17:52 UTC │
	│ ip      │ addons-289249 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-289249        │ jenkins │ v1.37.0 │ 01 Oct 25 17:53 UTC │ 01 Oct 25 17:53 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 17:47:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 17:47:48.492540   14186 out.go:360] Setting OutFile to fd 1 ...
	I1001 17:47:48.492824   14186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 17:47:48.492835   14186 out.go:374] Setting ErrFile to fd 2...
	I1001 17:47:48.492842   14186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 17:47:48.493171   14186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 17:47:48.493699   14186 out.go:368] Setting JSON to false
	I1001 17:47:48.494465   14186 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1812,"bootTime":1759339056,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 17:47:48.494554   14186 start.go:140] virtualization: kvm guest
	I1001 17:47:48.543074   14186 out.go:179] * [addons-289249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1001 17:47:48.615946   14186 notify.go:220] Checking for updates...
	I1001 17:47:48.616019   14186 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 17:47:48.617393   14186 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 17:47:48.618655   14186 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 17:47:48.619757   14186 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 17:47:48.620975   14186 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 17:47:48.622020   14186 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 17:47:48.623188   14186 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 17:47:48.651899   14186 out.go:179] * Using the kvm2 driver based on user configuration
	I1001 17:47:48.653090   14186 start.go:304] selected driver: kvm2
	I1001 17:47:48.653106   14186 start.go:921] validating driver "kvm2" against <nil>
	I1001 17:47:48.653118   14186 start.go:932] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 17:47:48.653777   14186 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 17:47:48.653839   14186 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 17:47:48.666953   14186 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 17:47:48.666995   14186 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 17:47:48.680273   14186 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 17:47:48.680314   14186 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1001 17:47:48.680664   14186 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 17:47:48.680715   14186 cni.go:84] Creating CNI manager for ""
	I1001 17:47:48.680767   14186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 17:47:48.680782   14186 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 17:47:48.680836   14186 start.go:348] cluster config:
	{Name:addons-289249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-289249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 17:47:48.680965   14186 iso.go:125] acquiring lock: {Name:mke4f33636eb3043bce5a51fcbb56cd6b63e4b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 17:47:48.682595   14186 out.go:179] * Starting "addons-289249" primary control-plane node in "addons-289249" cluster
	I1001 17:47:48.683755   14186 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 17:47:48.683784   14186 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1001 17:47:48.683793   14186 cache.go:58] Caching tarball of preloaded images
	I1001 17:47:48.683879   14186 preload.go:233] Found /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 17:47:48.683889   14186 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1001 17:47:48.684157   14186 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/config.json ...
	I1001 17:47:48.684181   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/config.json: {Name:mk0a5514eb1084c51ee71dcecda57f3bc03c0cae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:47:48.684302   14186 start.go:360] acquireMachinesLock for addons-289249: {Name:mk9cde4a6dd309a36e894aa2ddacad5574ffdbe7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 17:47:48.684347   14186 start.go:364] duration metric: took 33.554µs to acquireMachinesLock for "addons-289249"
	I1001 17:47:48.684364   14186 start.go:93] Provisioning new machine with config: &{Name:addons-289249 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clu
sterName:addons-289249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 17:47:48.684440   14186 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 17:47:48.685820   14186 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1001 17:47:48.685936   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:47:48.685973   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:47:48.698342   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
	I1001 17:47:48.698830   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:47:48.699315   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:47:48.699338   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:47:48.699771   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:47:48.699950   14186 main.go:141] libmachine: (addons-289249) Calling .GetMachineName
	I1001 17:47:48.700095   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:47:48.700227   14186 start.go:159] libmachine.API.Create for "addons-289249" (driver="kvm2")
	I1001 17:47:48.700252   14186 client.go:168] LocalClient.Create starting
	I1001 17:47:48.700290   14186 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem
	I1001 17:47:49.007358   14186 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem
	I1001 17:47:49.067140   14186 main.go:141] libmachine: Running pre-create checks...
	I1001 17:47:49.067164   14186 main.go:141] libmachine: (addons-289249) Calling .PreCreateCheck
	I1001 17:47:49.067634   14186 main.go:141] libmachine: (addons-289249) Calling .GetConfigRaw
	I1001 17:47:49.068065   14186 main.go:141] libmachine: Creating machine...
	I1001 17:47:49.068078   14186 main.go:141] libmachine: (addons-289249) Calling .Create
	I1001 17:47:49.068236   14186 main.go:141] libmachine: (addons-289249) creating domain...
	I1001 17:47:49.068250   14186 main.go:141] libmachine: (addons-289249) creating network...
	I1001 17:47:49.069750   14186 main.go:141] libmachine: (addons-289249) DBG | found existing default network
	I1001 17:47:49.069898   14186 main.go:141] libmachine: (addons-289249) DBG | <network>
	I1001 17:47:49.069921   14186 main.go:141] libmachine: (addons-289249) DBG |   <name>default</name>
	I1001 17:47:49.069934   14186 main.go:141] libmachine: (addons-289249) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1001 17:47:49.069948   14186 main.go:141] libmachine: (addons-289249) DBG |   <forward mode='nat'>
	I1001 17:47:49.069973   14186 main.go:141] libmachine: (addons-289249) DBG |     <nat>
	I1001 17:47:49.069985   14186 main.go:141] libmachine: (addons-289249) DBG |       <port start='1024' end='65535'/>
	I1001 17:47:49.069993   14186 main.go:141] libmachine: (addons-289249) DBG |     </nat>
	I1001 17:47:49.070007   14186 main.go:141] libmachine: (addons-289249) DBG |   </forward>
	I1001 17:47:49.070020   14186 main.go:141] libmachine: (addons-289249) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1001 17:47:49.070066   14186 main.go:141] libmachine: (addons-289249) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1001 17:47:49.070092   14186 main.go:141] libmachine: (addons-289249) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1001 17:47:49.070109   14186 main.go:141] libmachine: (addons-289249) DBG |     <dhcp>
	I1001 17:47:49.070136   14186 main.go:141] libmachine: (addons-289249) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1001 17:47:49.070159   14186 main.go:141] libmachine: (addons-289249) DBG |     </dhcp>
	I1001 17:47:49.070175   14186 main.go:141] libmachine: (addons-289249) DBG |   </ip>
	I1001 17:47:49.070186   14186 main.go:141] libmachine: (addons-289249) DBG | </network>
	I1001 17:47:49.070199   14186 main.go:141] libmachine: (addons-289249) DBG | 
	I1001 17:47:49.070605   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:49.070425   14214 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123550}
	I1001 17:47:49.070656   14186 main.go:141] libmachine: (addons-289249) DBG | defining private network:
	I1001 17:47:49.070679   14186 main.go:141] libmachine: (addons-289249) DBG | 
	I1001 17:47:49.070689   14186 main.go:141] libmachine: (addons-289249) DBG | <network>
	I1001 17:47:49.070700   14186 main.go:141] libmachine: (addons-289249) DBG |   <name>mk-addons-289249</name>
	I1001 17:47:49.070708   14186 main.go:141] libmachine: (addons-289249) DBG |   <dns enable='no'/>
	I1001 17:47:49.070720   14186 main.go:141] libmachine: (addons-289249) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 17:47:49.070732   14186 main.go:141] libmachine: (addons-289249) DBG |     <dhcp>
	I1001 17:47:49.070749   14186 main.go:141] libmachine: (addons-289249) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 17:47:49.070762   14186 main.go:141] libmachine: (addons-289249) DBG |     </dhcp>
	I1001 17:47:49.070776   14186 main.go:141] libmachine: (addons-289249) DBG |   </ip>
	I1001 17:47:49.070788   14186 main.go:141] libmachine: (addons-289249) DBG | </network>
	I1001 17:47:49.070796   14186 main.go:141] libmachine: (addons-289249) DBG | 
	I1001 17:47:49.076682   14186 main.go:141] libmachine: (addons-289249) DBG | creating private network mk-addons-289249 192.168.39.0/24...
	I1001 17:47:49.139290   14186 main.go:141] libmachine: (addons-289249) DBG | private network mk-addons-289249 192.168.39.0/24 created
	I1001 17:47:49.139567   14186 main.go:141] libmachine: (addons-289249) DBG | <network>
	I1001 17:47:49.139588   14186 main.go:141] libmachine: (addons-289249) DBG |   <name>mk-addons-289249</name>
	I1001 17:47:49.139599   14186 main.go:141] libmachine: (addons-289249) setting up store path in /home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249 ...
	I1001 17:47:49.139629   14186 main.go:141] libmachine: (addons-289249) DBG |   <uuid>5f6601aa-76b5-4535-bdbc-4797e42d1696</uuid>
	I1001 17:47:49.139654   14186 main.go:141] libmachine: (addons-289249) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1001 17:47:49.139672   14186 main.go:141] libmachine: (addons-289249) building disk image from file:///home/jenkins/minikube-integration/21631-9542/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1001 17:47:49.139696   14186 main.go:141] libmachine: (addons-289249) Downloading /home/jenkins/minikube-integration/21631-9542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21631-9542/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1001 17:47:49.139721   14186 main.go:141] libmachine: (addons-289249) DBG |   <mac address='52:54:00:17:53:8c'/>
	I1001 17:47:49.139735   14186 main.go:141] libmachine: (addons-289249) DBG |   <dns enable='no'/>
	I1001 17:47:49.139747   14186 main.go:141] libmachine: (addons-289249) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 17:47:49.139765   14186 main.go:141] libmachine: (addons-289249) DBG |     <dhcp>
	I1001 17:47:49.139777   14186 main.go:141] libmachine: (addons-289249) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 17:47:49.139790   14186 main.go:141] libmachine: (addons-289249) DBG |     </dhcp>
	I1001 17:47:49.139798   14186 main.go:141] libmachine: (addons-289249) DBG |   </ip>
	I1001 17:47:49.139807   14186 main.go:141] libmachine: (addons-289249) DBG | </network>
	I1001 17:47:49.139816   14186 main.go:141] libmachine: (addons-289249) DBG | 
	I1001 17:47:49.139832   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:49.139537   14214 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 17:47:49.415821   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:49.415670   14214 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa...
	I1001 17:47:49.986382   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:49.986249   14214 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/addons-289249.rawdisk...
	I1001 17:47:49.986404   14186 main.go:141] libmachine: (addons-289249) DBG | Writing magic tar header
	I1001 17:47:49.986499   14186 main.go:141] libmachine: (addons-289249) DBG | Writing SSH key tar header
	I1001 17:47:49.986525   14186 main.go:141] libmachine: (addons-289249) setting executable bit set on /home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249 (perms=drwx------)
	I1001 17:47:49.986538   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:49.986371   14214 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249 ...
	I1001 17:47:49.986552   14186 main.go:141] libmachine: (addons-289249) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249
	I1001 17:47:49.986559   14186 main.go:141] libmachine: (addons-289249) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542/.minikube/machines
	I1001 17:47:49.986604   14186 main.go:141] libmachine: (addons-289249) setting executable bit set on /home/jenkins/minikube-integration/21631-9542/.minikube/machines (perms=drwxr-xr-x)
	I1001 17:47:49.986614   14186 main.go:141] libmachine: (addons-289249) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 17:47:49.986621   14186 main.go:141] libmachine: (addons-289249) setting executable bit set on /home/jenkins/minikube-integration/21631-9542/.minikube (perms=drwxr-xr-x)
	I1001 17:47:49.986634   14186 main.go:141] libmachine: (addons-289249) setting executable bit set on /home/jenkins/minikube-integration/21631-9542 (perms=drwxrwxr-x)
	I1001 17:47:49.986640   14186 main.go:141] libmachine: (addons-289249) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 17:47:49.986646   14186 main.go:141] libmachine: (addons-289249) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542
	I1001 17:47:49.986654   14186 main.go:141] libmachine: (addons-289249) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1001 17:47:49.986662   14186 main.go:141] libmachine: (addons-289249) DBG | checking permissions on dir: /home/jenkins
	I1001 17:47:49.986672   14186 main.go:141] libmachine: (addons-289249) DBG | checking permissions on dir: /home
	I1001 17:47:49.986683   14186 main.go:141] libmachine: (addons-289249) DBG | skipping /home - not owner
	I1001 17:47:49.986704   14186 main.go:141] libmachine: (addons-289249) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 17:47:49.986720   14186 main.go:141] libmachine: (addons-289249) defining domain...
	I1001 17:47:49.988225   14186 main.go:141] libmachine: (addons-289249) defining domain using XML: 
	I1001 17:47:49.988238   14186 main.go:141] libmachine: (addons-289249) <domain type='kvm'>
	I1001 17:47:49.988256   14186 main.go:141] libmachine: (addons-289249)   <name>addons-289249</name>
	I1001 17:47:49.988268   14186 main.go:141] libmachine: (addons-289249)   <memory unit='MiB'>4096</memory>
	I1001 17:47:49.988276   14186 main.go:141] libmachine: (addons-289249)   <vcpu>2</vcpu>
	I1001 17:47:49.988282   14186 main.go:141] libmachine: (addons-289249)   <features>
	I1001 17:47:49.988288   14186 main.go:141] libmachine: (addons-289249)     <acpi/>
	I1001 17:47:49.988298   14186 main.go:141] libmachine: (addons-289249)     <apic/>
	I1001 17:47:49.988311   14186 main.go:141] libmachine: (addons-289249)     <pae/>
	I1001 17:47:49.988325   14186 main.go:141] libmachine: (addons-289249)   </features>
	I1001 17:47:49.988335   14186 main.go:141] libmachine: (addons-289249)   <cpu mode='host-passthrough'>
	I1001 17:47:49.988343   14186 main.go:141] libmachine: (addons-289249)   </cpu>
	I1001 17:47:49.988351   14186 main.go:141] libmachine: (addons-289249)   <os>
	I1001 17:47:49.988361   14186 main.go:141] libmachine: (addons-289249)     <type>hvm</type>
	I1001 17:47:49.988367   14186 main.go:141] libmachine: (addons-289249)     <boot dev='cdrom'/>
	I1001 17:47:49.988373   14186 main.go:141] libmachine: (addons-289249)     <boot dev='hd'/>
	I1001 17:47:49.988378   14186 main.go:141] libmachine: (addons-289249)     <bootmenu enable='no'/>
	I1001 17:47:49.988382   14186 main.go:141] libmachine: (addons-289249)   </os>
	I1001 17:47:49.988386   14186 main.go:141] libmachine: (addons-289249)   <devices>
	I1001 17:47:49.988393   14186 main.go:141] libmachine: (addons-289249)     <disk type='file' device='cdrom'>
	I1001 17:47:49.988406   14186 main.go:141] libmachine: (addons-289249)       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/boot2docker.iso'/>
	I1001 17:47:49.988419   14186 main.go:141] libmachine: (addons-289249)       <target dev='hdc' bus='scsi'/>
	I1001 17:47:49.988441   14186 main.go:141] libmachine: (addons-289249)       <readonly/>
	I1001 17:47:49.988452   14186 main.go:141] libmachine: (addons-289249)     </disk>
	I1001 17:47:49.988465   14186 main.go:141] libmachine: (addons-289249)     <disk type='file' device='disk'>
	I1001 17:47:49.988478   14186 main.go:141] libmachine: (addons-289249)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 17:47:49.988489   14186 main.go:141] libmachine: (addons-289249)       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/addons-289249.rawdisk'/>
	I1001 17:47:49.988496   14186 main.go:141] libmachine: (addons-289249)       <target dev='hda' bus='virtio'/>
	I1001 17:47:49.988501   14186 main.go:141] libmachine: (addons-289249)     </disk>
	I1001 17:47:49.988508   14186 main.go:141] libmachine: (addons-289249)     <interface type='network'>
	I1001 17:47:49.988517   14186 main.go:141] libmachine: (addons-289249)       <source network='mk-addons-289249'/>
	I1001 17:47:49.988528   14186 main.go:141] libmachine: (addons-289249)       <model type='virtio'/>
	I1001 17:47:49.988540   14186 main.go:141] libmachine: (addons-289249)     </interface>
	I1001 17:47:49.988550   14186 main.go:141] libmachine: (addons-289249)     <interface type='network'>
	I1001 17:47:49.988561   14186 main.go:141] libmachine: (addons-289249)       <source network='default'/>
	I1001 17:47:49.988585   14186 main.go:141] libmachine: (addons-289249)       <model type='virtio'/>
	I1001 17:47:49.988608   14186 main.go:141] libmachine: (addons-289249)     </interface>
	I1001 17:47:49.988623   14186 main.go:141] libmachine: (addons-289249)     <serial type='pty'>
	I1001 17:47:49.988634   14186 main.go:141] libmachine: (addons-289249)       <target port='0'/>
	I1001 17:47:49.988643   14186 main.go:141] libmachine: (addons-289249)     </serial>
	I1001 17:47:49.988652   14186 main.go:141] libmachine: (addons-289249)     <console type='pty'>
	I1001 17:47:49.988662   14186 main.go:141] libmachine: (addons-289249)       <target type='serial' port='0'/>
	I1001 17:47:49.988672   14186 main.go:141] libmachine: (addons-289249)     </console>
	I1001 17:47:49.988682   14186 main.go:141] libmachine: (addons-289249)     <rng model='virtio'>
	I1001 17:47:49.988706   14186 main.go:141] libmachine: (addons-289249)       <backend model='random'>/dev/random</backend>
	I1001 17:47:49.988718   14186 main.go:141] libmachine: (addons-289249)     </rng>
	I1001 17:47:49.988724   14186 main.go:141] libmachine: (addons-289249)   </devices>
	I1001 17:47:49.988731   14186 main.go:141] libmachine: (addons-289249) </domain>
	I1001 17:47:49.988738   14186 main.go:141] libmachine: (addons-289249) 
	I1001 17:47:49.995018   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:bb:2f:76 in network default
	I1001 17:47:49.995626   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:49.995653   14186 main.go:141] libmachine: (addons-289249) starting domain...
	I1001 17:47:49.995661   14186 main.go:141] libmachine: (addons-289249) ensuring networks are active...
	I1001 17:47:49.996393   14186 main.go:141] libmachine: (addons-289249) Ensuring network default is active
	I1001 17:47:49.996765   14186 main.go:141] libmachine: (addons-289249) Ensuring network mk-addons-289249 is active
	I1001 17:47:49.997576   14186 main.go:141] libmachine: (addons-289249) getting domain XML...
	I1001 17:47:49.998536   14186 main.go:141] libmachine: (addons-289249) DBG | starting domain XML:
	I1001 17:47:49.998546   14186 main.go:141] libmachine: (addons-289249) DBG | <domain type='kvm'>
	I1001 17:47:49.998572   14186 main.go:141] libmachine: (addons-289249) DBG |   <name>addons-289249</name>
	I1001 17:47:49.998591   14186 main.go:141] libmachine: (addons-289249) DBG |   <uuid>bf149cde-4f8a-4282-b129-ffa0c4de9a2d</uuid>
	I1001 17:47:49.998600   14186 main.go:141] libmachine: (addons-289249) DBG |   <memory unit='KiB'>4194304</memory>
	I1001 17:47:49.998611   14186 main.go:141] libmachine: (addons-289249) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1001 17:47:49.998618   14186 main.go:141] libmachine: (addons-289249) DBG |   <vcpu placement='static'>2</vcpu>
	I1001 17:47:49.998622   14186 main.go:141] libmachine: (addons-289249) DBG |   <os>
	I1001 17:47:49.998641   14186 main.go:141] libmachine: (addons-289249) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1001 17:47:49.998648   14186 main.go:141] libmachine: (addons-289249) DBG |     <boot dev='cdrom'/>
	I1001 17:47:49.998653   14186 main.go:141] libmachine: (addons-289249) DBG |     <boot dev='hd'/>
	I1001 17:47:49.998658   14186 main.go:141] libmachine: (addons-289249) DBG |     <bootmenu enable='no'/>
	I1001 17:47:49.998663   14186 main.go:141] libmachine: (addons-289249) DBG |   </os>
	I1001 17:47:49.998674   14186 main.go:141] libmachine: (addons-289249) DBG |   <features>
	I1001 17:47:49.998678   14186 main.go:141] libmachine: (addons-289249) DBG |     <acpi/>
	I1001 17:47:49.998685   14186 main.go:141] libmachine: (addons-289249) DBG |     <apic/>
	I1001 17:47:49.998691   14186 main.go:141] libmachine: (addons-289249) DBG |     <pae/>
	I1001 17:47:49.998695   14186 main.go:141] libmachine: (addons-289249) DBG |   </features>
	I1001 17:47:49.998701   14186 main.go:141] libmachine: (addons-289249) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1001 17:47:49.998707   14186 main.go:141] libmachine: (addons-289249) DBG |   <clock offset='utc'/>
	I1001 17:47:49.998712   14186 main.go:141] libmachine: (addons-289249) DBG |   <on_poweroff>destroy</on_poweroff>
	I1001 17:47:49.998717   14186 main.go:141] libmachine: (addons-289249) DBG |   <on_reboot>restart</on_reboot>
	I1001 17:47:49.998722   14186 main.go:141] libmachine: (addons-289249) DBG |   <on_crash>destroy</on_crash>
	I1001 17:47:49.998727   14186 main.go:141] libmachine: (addons-289249) DBG |   <devices>
	I1001 17:47:49.998733   14186 main.go:141] libmachine: (addons-289249) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1001 17:47:49.998739   14186 main.go:141] libmachine: (addons-289249) DBG |     <disk type='file' device='cdrom'>
	I1001 17:47:49.998744   14186 main.go:141] libmachine: (addons-289249) DBG |       <driver name='qemu' type='raw'/>
	I1001 17:47:49.998751   14186 main.go:141] libmachine: (addons-289249) DBG |       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/boot2docker.iso'/>
	I1001 17:47:49.998759   14186 main.go:141] libmachine: (addons-289249) DBG |       <target dev='hdc' bus='scsi'/>
	I1001 17:47:49.998763   14186 main.go:141] libmachine: (addons-289249) DBG |       <readonly/>
	I1001 17:47:49.998786   14186 main.go:141] libmachine: (addons-289249) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1001 17:47:49.998807   14186 main.go:141] libmachine: (addons-289249) DBG |     </disk>
	I1001 17:47:49.998819   14186 main.go:141] libmachine: (addons-289249) DBG |     <disk type='file' device='disk'>
	I1001 17:47:49.998831   14186 main.go:141] libmachine: (addons-289249) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1001 17:47:49.998852   14186 main.go:141] libmachine: (addons-289249) DBG |       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/addons-289249.rawdisk'/>
	I1001 17:47:49.998867   14186 main.go:141] libmachine: (addons-289249) DBG |       <target dev='hda' bus='virtio'/>
	I1001 17:47:49.998882   14186 main.go:141] libmachine: (addons-289249) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1001 17:47:49.998889   14186 main.go:141] libmachine: (addons-289249) DBG |     </disk>
	I1001 17:47:49.998902   14186 main.go:141] libmachine: (addons-289249) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1001 17:47:49.998915   14186 main.go:141] libmachine: (addons-289249) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1001 17:47:49.998926   14186 main.go:141] libmachine: (addons-289249) DBG |     </controller>
	I1001 17:47:49.998936   14186 main.go:141] libmachine: (addons-289249) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1001 17:47:49.998942   14186 main.go:141] libmachine: (addons-289249) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1001 17:47:49.998958   14186 main.go:141] libmachine: (addons-289249) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1001 17:47:49.998971   14186 main.go:141] libmachine: (addons-289249) DBG |     </controller>
	I1001 17:47:49.998977   14186 main.go:141] libmachine: (addons-289249) DBG |     <interface type='network'>
	I1001 17:47:49.998994   14186 main.go:141] libmachine: (addons-289249) DBG |       <mac address='52:54:00:a8:90:b2'/>
	I1001 17:47:49.999004   14186 main.go:141] libmachine: (addons-289249) DBG |       <source network='mk-addons-289249'/>
	I1001 17:47:49.999010   14186 main.go:141] libmachine: (addons-289249) DBG |       <model type='virtio'/>
	I1001 17:47:49.999019   14186 main.go:141] libmachine: (addons-289249) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1001 17:47:49.999030   14186 main.go:141] libmachine: (addons-289249) DBG |     </interface>
	I1001 17:47:49.999038   14186 main.go:141] libmachine: (addons-289249) DBG |     <interface type='network'>
	I1001 17:47:49.999050   14186 main.go:141] libmachine: (addons-289249) DBG |       <mac address='52:54:00:bb:2f:76'/>
	I1001 17:47:49.999058   14186 main.go:141] libmachine: (addons-289249) DBG |       <source network='default'/>
	I1001 17:47:49.999079   14186 main.go:141] libmachine: (addons-289249) DBG |       <model type='virtio'/>
	I1001 17:47:49.999093   14186 main.go:141] libmachine: (addons-289249) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1001 17:47:49.999100   14186 main.go:141] libmachine: (addons-289249) DBG |     </interface>
	I1001 17:47:49.999112   14186 main.go:141] libmachine: (addons-289249) DBG |     <serial type='pty'>
	I1001 17:47:49.999123   14186 main.go:141] libmachine: (addons-289249) DBG |       <target type='isa-serial' port='0'>
	I1001 17:47:49.999135   14186 main.go:141] libmachine: (addons-289249) DBG |         <model name='isa-serial'/>
	I1001 17:47:49.999147   14186 main.go:141] libmachine: (addons-289249) DBG |       </target>
	I1001 17:47:49.999163   14186 main.go:141] libmachine: (addons-289249) DBG |     </serial>
	I1001 17:47:49.999173   14186 main.go:141] libmachine: (addons-289249) DBG |     <console type='pty'>
	I1001 17:47:49.999182   14186 main.go:141] libmachine: (addons-289249) DBG |       <target type='serial' port='0'/>
	I1001 17:47:49.999195   14186 main.go:141] libmachine: (addons-289249) DBG |     </console>
	I1001 17:47:49.999221   14186 main.go:141] libmachine: (addons-289249) DBG |     <input type='mouse' bus='ps2'/>
	I1001 17:47:49.999239   14186 main.go:141] libmachine: (addons-289249) DBG |     <input type='keyboard' bus='ps2'/>
	I1001 17:47:49.999256   14186 main.go:141] libmachine: (addons-289249) DBG |     <audio id='1' type='none'/>
	I1001 17:47:49.999267   14186 main.go:141] libmachine: (addons-289249) DBG |     <memballoon model='virtio'>
	I1001 17:47:49.999280   14186 main.go:141] libmachine: (addons-289249) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1001 17:47:49.999295   14186 main.go:141] libmachine: (addons-289249) DBG |     </memballoon>
	I1001 17:47:49.999313   14186 main.go:141] libmachine: (addons-289249) DBG |     <rng model='virtio'>
	I1001 17:47:49.999328   14186 main.go:141] libmachine: (addons-289249) DBG |       <backend model='random'>/dev/random</backend>
	I1001 17:47:49.999343   14186 main.go:141] libmachine: (addons-289249) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1001 17:47:49.999352   14186 main.go:141] libmachine: (addons-289249) DBG |     </rng>
	I1001 17:47:49.999360   14186 main.go:141] libmachine: (addons-289249) DBG |   </devices>
	I1001 17:47:49.999368   14186 main.go:141] libmachine: (addons-289249) DBG | </domain>
	I1001 17:47:49.999375   14186 main.go:141] libmachine: (addons-289249) DBG | 
	I1001 17:47:51.267217   14186 main.go:141] libmachine: (addons-289249) waiting for domain to start...
	I1001 17:47:51.268326   14186 main.go:141] libmachine: (addons-289249) domain is now running
	I1001 17:47:51.268354   14186 main.go:141] libmachine: (addons-289249) waiting for IP...
	I1001 17:47:51.269018   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:51.269477   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:47:51.269501   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:47:51.269720   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:47:51.269762   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:51.269710   14214 retry.go:31] will retry after 240.61052ms: waiting for domain to come up
	I1001 17:47:51.513365   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:51.513842   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:47:51.513865   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:47:51.514109   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:47:51.514132   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:51.514080   14214 retry.go:31] will retry after 352.34659ms: waiting for domain to come up
	I1001 17:47:51.867855   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:51.868485   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:47:51.868522   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:47:51.868834   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:47:51.868934   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:51.868865   14214 retry.go:31] will retry after 406.684518ms: waiting for domain to come up
	I1001 17:47:52.277786   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:52.278384   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:47:52.278404   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:47:52.278716   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:47:52.278766   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:52.278711   14214 retry.go:31] will retry after 589.169663ms: waiting for domain to come up
	I1001 17:47:52.869564   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:52.870029   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:47:52.870048   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:47:52.870338   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:47:52.870372   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:52.870319   14214 retry.go:31] will retry after 718.723481ms: waiting for domain to come up
	I1001 17:47:53.590626   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:53.591156   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:47:53.591176   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:47:53.591485   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:47:53.591509   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:53.591456   14214 retry.go:31] will retry after 864.053429ms: waiting for domain to come up
	I1001 17:47:54.457207   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:54.457698   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:47:54.457718   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:47:54.457997   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:47:54.458020   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:54.457979   14214 retry.go:31] will retry after 1.172240528s: waiting for domain to come up
	I1001 17:47:55.632184   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:55.632794   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:47:55.632816   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:47:55.633131   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:47:55.633171   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:55.633121   14214 retry.go:31] will retry after 1.26055064s: waiting for domain to come up
	I1001 17:47:56.895559   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:56.896005   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:47:56.896030   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:47:56.896240   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:47:56.896267   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:56.896222   14214 retry.go:31] will retry after 1.435906966s: waiting for domain to come up
	I1001 17:47:58.333331   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:47:58.333798   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:47:58.333825   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:47:58.334018   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:47:58.334064   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:47:58.334022   14214 retry.go:31] will retry after 1.775788081s: waiting for domain to come up
	I1001 17:48:00.112407   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:00.112773   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:48:00.112800   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:48:00.113024   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:48:00.113046   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:48:00.113008   14214 retry.go:31] will retry after 1.787128523s: waiting for domain to come up
	I1001 17:48:01.903347   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:01.903871   14186 main.go:141] libmachine: (addons-289249) DBG | no network interface addresses found for domain addons-289249 (source=lease)
	I1001 17:48:01.903916   14186 main.go:141] libmachine: (addons-289249) DBG | trying to list again with source=arp
	I1001 17:48:01.904207   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find current IP address of domain addons-289249 in network mk-addons-289249 (interfaces detected: [])
	I1001 17:48:01.904247   14186 main.go:141] libmachine: (addons-289249) DBG | I1001 17:48:01.904196   14214 retry.go:31] will retry after 3.283719124s: waiting for domain to come up
	I1001 17:48:05.190021   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.190654   14186 main.go:141] libmachine: (addons-289249) found domain IP: 192.168.39.98
	I1001 17:48:05.190687   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has current primary IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.190695   14186 main.go:141] libmachine: (addons-289249) reserving static IP address...
	I1001 17:48:05.191059   14186 main.go:141] libmachine: (addons-289249) DBG | unable to find host DHCP lease matching {name: "addons-289249", mac: "52:54:00:a8:90:b2", ip: "192.168.39.98"} in network mk-addons-289249
	I1001 17:48:05.380571   14186 main.go:141] libmachine: (addons-289249) reserved static IP address 192.168.39.98 for domain addons-289249
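The block of `retry.go:31` lines above is the wait-for-IP loop: it polls the libvirt DHCP leases (then ARP) and sleeps for a growing, jittered interval between attempts. A minimal sketch of that retry-with-backoff pattern in Go; names and constants here are illustrative, not minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with a growing, jittered delay until it succeeds or the
// deadline passes, mirroring the "will retry after ..." lines in the log.
func waitFor(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for domain to come up")
		}
		// Jitter the sleep and grow the base delay for the next round.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("no IP yet") // stand-in for "no lease / no ARP entry"
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done after", attempts, "attempts, err =", err)
}

The jitter keeps repeated polls from lining up; the 240ms, 352ms, 406ms, ... sequence in the log comes from the same growing-delay idea.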
	I1001 17:48:05.380591   14186 main.go:141] libmachine: (addons-289249) waiting for SSH...
	I1001 17:48:05.380630   14186 main.go:141] libmachine: (addons-289249) DBG | Getting to WaitForSSH function...
	I1001 17:48:05.383463   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.383911   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:05.383938   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.384119   14186 main.go:141] libmachine: (addons-289249) DBG | Using SSH client type: external
	I1001 17:48:05.384148   14186 main.go:141] libmachine: (addons-289249) DBG | Using SSH private key: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa (-rw-------)
	I1001 17:48:05.384195   14186 main.go:141] libmachine: (addons-289249) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 17:48:05.384212   14186 main.go:141] libmachine: (addons-289249) DBG | About to run SSH command:
	I1001 17:48:05.384243   14186 main.go:141] libmachine: (addons-289249) DBG | exit 0
	I1001 17:48:05.519091   14186 main.go:141] libmachine: (addons-289249) DBG | SSH cmd err, output: <nil>: 
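The WaitForSSH step above shells out to the system ssh binary with the non-interactive options listed at 17:48:05.384195 and treats a clean `exit 0` as proof that the guest's sshd is answering. A rough sketch of that probe; host, key path and options are taken from the log, the helper name is hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady shells out to the system ssh binary, runs `exit 0` on the guest,
// and reports whether the command returned cleanly (the same probe as above).
func sshReady(host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	// Values from the log; the key is the profile's generated id_rsa.
	fmt.Println("ssh ready:", sshReady("192.168.39.98", "/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa"))
}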
	I1001 17:48:05.519358   14186 main.go:141] libmachine: (addons-289249) domain creation complete
	I1001 17:48:05.519705   14186 main.go:141] libmachine: (addons-289249) Calling .GetConfigRaw
	I1001 17:48:05.520321   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:05.520500   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:05.520648   14186 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 17:48:05.520667   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:05.521956   14186 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 17:48:05.521971   14186 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 17:48:05.521978   14186 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 17:48:05.521986   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:05.524535   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.524938   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:05.524960   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.525116   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:05.525246   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:05.525356   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:05.525462   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:05.525586   14186 main.go:141] libmachine: Using SSH client type: native
	I1001 17:48:05.525878   14186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I1001 17:48:05.525894   14186 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 17:48:05.627950   14186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 17:48:05.627970   14186 main.go:141] libmachine: Detecting the provisioner...
	I1001 17:48:05.627978   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:05.631061   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.631459   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:05.631491   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.631664   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:05.631863   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:05.632034   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:05.632191   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:05.632337   14186 main.go:141] libmachine: Using SSH client type: native
	I1001 17:48:05.632621   14186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I1001 17:48:05.632636   14186 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 17:48:05.736569   14186 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1001 17:48:05.736652   14186 main.go:141] libmachine: found compatible host: buildroot
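Provisioner detection is just `cat /etc/os-release` over SSH followed by reading the ID/NAME fields; "buildroot" selects the Buildroot provisioner. A small sketch of parsing that key=value output (hypothetical helper, not the libmachine code):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the key=value output of /etc/os-release into a map,
// stripping surrounding quotes, e.g. ID=buildroot, PRETTY_NAME="Buildroot 2025.02".
func parseOSRelease(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}

func main() {
	out := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n"
	f := parseOSRelease(out)
	fmt.Println(f["ID"], f["VERSION_ID"]) // buildroot 2025.02
}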
	I1001 17:48:05.736658   14186 main.go:141] libmachine: Provisioning with buildroot...
	I1001 17:48:05.736666   14186 main.go:141] libmachine: (addons-289249) Calling .GetMachineName
	I1001 17:48:05.736894   14186 buildroot.go:166] provisioning hostname "addons-289249"
	I1001 17:48:05.736927   14186 main.go:141] libmachine: (addons-289249) Calling .GetMachineName
	I1001 17:48:05.737119   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:05.740034   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.740479   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:05.740507   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.740679   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:05.740837   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:05.741001   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:05.741146   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:05.741319   14186 main.go:141] libmachine: Using SSH client type: native
	I1001 17:48:05.741563   14186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I1001 17:48:05.741575   14186 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-289249 && echo "addons-289249" | sudo tee /etc/hostname
	I1001 17:48:05.872246   14186 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-289249
	
	I1001 17:48:05.872283   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:05.875403   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.875786   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:05.875816   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.875955   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:05.876151   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:05.876346   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:05.876535   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:05.876751   14186 main.go:141] libmachine: Using SSH client type: native
	I1001 17:48:05.877031   14186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I1001 17:48:05.877059   14186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-289249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-289249/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-289249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 17:48:05.992835   14186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
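The hostname step sets /etc/hostname and then runs the idempotent /etc/hosts snippet shown above: `grep -xq` first checks whether the name is already present, so re-provisioning never appends duplicate 127.0.1.1 lines. A sketch of rendering that snippet for an arbitrary profile name (illustrative only):

package main

import "fmt"

// hostsFixupCmd renders the idempotent /etc/hosts snippet from the log for a
// given hostname: rewrite an existing 127.0.1.1 entry, or append a new one.
func hostsFixupCmd(name string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, name)
}

func main() {
	fmt.Println(hostsFixupCmd("addons-289249"))
}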
	I1001 17:48:05.992865   14186 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21631-9542/.minikube CaCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21631-9542/.minikube}
	I1001 17:48:05.992898   14186 buildroot.go:174] setting up certificates
	I1001 17:48:05.992932   14186 provision.go:84] configureAuth start
	I1001 17:48:05.992945   14186 main.go:141] libmachine: (addons-289249) Calling .GetMachineName
	I1001 17:48:05.993249   14186 main.go:141] libmachine: (addons-289249) Calling .GetIP
	I1001 17:48:05.995943   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.996264   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:05.996287   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.996517   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:05.998791   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.999192   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:05.999236   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:05.999361   14186 provision.go:143] copyHostCerts
	I1001 17:48:05.999448   14186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem (1082 bytes)
	I1001 17:48:05.999569   14186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem (1123 bytes)
	I1001 17:48:05.999642   14186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem (1675 bytes)
	I1001 17:48:05.999693   14186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem org=jenkins.addons-289249 san=[127.0.0.1 192.168.39.98 addons-289249 localhost minikube]
	I1001 17:48:06.530905   14186 provision.go:177] copyRemoteCerts
	I1001 17:48:06.530963   14186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 17:48:06.530990   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:06.533631   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:06.533946   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:06.533980   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:06.534136   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:06.534346   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:06.534500   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:06.534667   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:06.616314   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 17:48:06.644198   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 17:48:06.672244   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1001 17:48:06.699197   14186 provision.go:87] duration metric: took 706.251472ms to configureAuth
	I1001 17:48:06.699220   14186 buildroot.go:189] setting minikube options for container-runtime
	I1001 17:48:06.699393   14186 config.go:182] Loaded profile config "addons-289249": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 17:48:06.699482   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:06.702359   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:06.702748   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:06.702778   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:06.703019   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:06.703225   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:06.703363   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:06.703535   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:06.703718   14186 main.go:141] libmachine: Using SSH client type: native
	I1001 17:48:06.703907   14186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I1001 17:48:06.703922   14186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 17:48:06.945060   14186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 17:48:06.945087   14186 main.go:141] libmachine: Checking connection to Docker...
	I1001 17:48:06.945095   14186 main.go:141] libmachine: (addons-289249) Calling .GetURL
	I1001 17:48:06.946363   14186 main.go:141] libmachine: (addons-289249) DBG | using libvirt version 8000000
	I1001 17:48:06.949087   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:06.949506   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:06.949533   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:06.949719   14186 main.go:141] libmachine: Docker is up and running!
	I1001 17:48:06.949734   14186 main.go:141] libmachine: Reticulating splines...
	I1001 17:48:06.949742   14186 client.go:171] duration metric: took 18.24948133s to LocalClient.Create
	I1001 17:48:06.949769   14186 start.go:167] duration metric: took 18.249552303s to libmachine.API.Create "addons-289249"
	I1001 17:48:06.949782   14186 start.go:293] postStartSetup for "addons-289249" (driver="kvm2")
	I1001 17:48:06.949795   14186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 17:48:06.949831   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:06.950102   14186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 17:48:06.950124   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:06.953035   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:06.953446   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:06.953474   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:06.953646   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:06.953817   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:06.953971   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:06.954129   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:07.036481   14186 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 17:48:07.041086   14186 info.go:137] Remote host: Buildroot 2025.02
	I1001 17:48:07.041111   14186 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/addons for local assets ...
	I1001 17:48:07.041179   14186 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/files for local assets ...
	I1001 17:48:07.041212   14186 start.go:296] duration metric: took 91.422211ms for postStartSetup
	I1001 17:48:07.041250   14186 main.go:141] libmachine: (addons-289249) Calling .GetConfigRaw
	I1001 17:48:07.041942   14186 main.go:141] libmachine: (addons-289249) Calling .GetIP
	I1001 17:48:07.044743   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:07.045137   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:07.045171   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:07.045398   14186 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/config.json ...
	I1001 17:48:07.045619   14186 start.go:128] duration metric: took 18.361166538s to createHost
	I1001 17:48:07.045642   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:07.047932   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:07.048260   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:07.048283   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:07.048480   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:07.048635   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:07.048795   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:07.048943   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:07.049116   14186 main.go:141] libmachine: Using SSH client type: native
	I1001 17:48:07.049303   14186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I1001 17:48:07.049314   14186 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 17:48:07.150547   14186 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759340887.121550731
	
	I1001 17:48:07.150579   14186 fix.go:216] guest clock: 1759340887.121550731
	I1001 17:48:07.150588   14186 fix.go:229] Guest: 2025-10-01 17:48:07.121550731 +0000 UTC Remote: 2025-10-01 17:48:07.045632452 +0000 UTC m=+18.588625738 (delta=75.918279ms)
	I1001 17:48:07.150614   14186 fix.go:200] guest clock delta is within tolerance: 75.918279ms
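The clock check runs `date +%s.%N` inside the guest, compares the result with the host's wall clock, and proceeds only when the delta is within tolerance (about 76ms here). A minimal sketch of that comparison; the 2s tolerance below is an assumed value, not necessarily what fix.go uses:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is ahead of (or behind) the given host reference time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(0, 1759340887045632452) // host wall clock from the log
	d, _ := clockDelta("1759340887.121550731", host)
	const tolerance = 2 * time.Second // assumed value for illustration
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", d, d > -tolerance && d < tolerance)
}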
	I1001 17:48:07.150620   14186 start.go:83] releasing machines lock for "addons-289249", held for 18.466263313s
	I1001 17:48:07.150648   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:07.150930   14186 main.go:141] libmachine: (addons-289249) Calling .GetIP
	I1001 17:48:07.153873   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:07.154327   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:07.154359   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:07.154501   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:07.155002   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:07.155174   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:07.155257   14186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 17:48:07.155300   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:07.155361   14186 ssh_runner.go:195] Run: cat /version.json
	I1001 17:48:07.155385   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:07.158615   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:07.158785   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:07.159058   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:07.159078   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:07.159103   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:07.159119   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:07.159279   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:07.159455   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:07.159504   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:07.159598   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:07.159663   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:07.159758   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:07.159822   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:07.159922   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:07.268569   14186 ssh_runner.go:195] Run: systemctl --version
	I1001 17:48:07.274648   14186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 17:48:07.431232   14186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 17:48:07.437689   14186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 17:48:07.437757   14186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 17:48:07.456277   14186 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 17:48:07.456304   14186 start.go:495] detecting cgroup driver to use...
	I1001 17:48:07.456355   14186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 17:48:07.475002   14186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 17:48:07.491026   14186 docker.go:218] disabling cri-docker service (if available) ...
	I1001 17:48:07.491075   14186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 17:48:07.507528   14186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 17:48:07.522920   14186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 17:48:07.669025   14186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 17:48:07.885341   14186 docker.go:234] disabling docker service ...
	I1001 17:48:07.885411   14186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 17:48:07.901541   14186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 17:48:07.916360   14186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 17:48:08.077442   14186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 17:48:08.217290   14186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 17:48:08.232607   14186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 17:48:08.254809   14186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1001 17:48:08.254874   14186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 17:48:08.266890   14186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 17:48:08.266961   14186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 17:48:08.279174   14186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 17:48:08.292667   14186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 17:48:08.306975   14186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 17:48:08.320078   14186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 17:48:08.332500   14186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 17:48:08.352356   14186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
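The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and opens unprivileged ports via default_sysctls. A sketch of the first two rewrites expressed as string substitutions (the real step simply runs sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf mirrors the first two sed rewrites above: pin the pause image
// and force the cgroupfs cgroup manager in 02-crio.conf.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in))
}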
	I1001 17:48:08.364178   14186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 17:48:08.374385   14186 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 17:48:08.374462   14186 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 17:48:08.393222   14186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 17:48:08.404454   14186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 17:48:08.545026   14186 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 17:48:08.663655   14186 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 17:48:08.663748   14186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 17:48:08.669615   14186 start.go:563] Will wait 60s for crictl version
	I1001 17:48:08.669684   14186 ssh_runner.go:195] Run: which crictl
	I1001 17:48:08.673919   14186 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 17:48:08.717992   14186 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 17:48:08.718116   14186 ssh_runner.go:195] Run: crio --version
	I1001 17:48:08.753003   14186 ssh_runner.go:195] Run: crio --version
	I1001 17:48:08.787740   14186 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1001 17:48:08.788791   14186 main.go:141] libmachine: (addons-289249) Calling .GetIP
	I1001 17:48:08.791938   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:08.792362   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:08.792395   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:08.792668   14186 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 17:48:08.796793   14186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 17:48:08.811501   14186 kubeadm.go:875] updating cluster {Name:addons-289249 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-289249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 17:48:08.811622   14186 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 17:48:08.811685   14186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 17:48:08.852112   14186 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1001 17:48:08.852176   14186 ssh_runner.go:195] Run: which lz4
	I1001 17:48:08.856172   14186 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 17:48:08.861157   14186 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 17:48:08.861186   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1001 17:48:10.258169   14186 crio.go:462] duration metric: took 1.402017574s to copy over tarball
	I1001 17:48:10.258249   14186 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 17:48:11.921261   14186 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.662984736s)
	I1001 17:48:11.921286   14186 crio.go:469] duration metric: took 1.663089363s to extract the tarball
	I1001 17:48:11.921293   14186 ssh_runner.go:146] rm: /preloaded.tar.lz4
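The preload path is: `stat /preloaded.tar.lz4` on the guest, scp the cached tarball over if it is missing, then unpack it into /var with lz4 while preserving security.capability xattrs. A sketch of the extraction step only (paths are placeholders):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed preload tarball into dir while
// preserving security.capability xattrs, like the tar invocation above.
func extractPreload(tarball, dir string) error {
	if _, err := os.Stat(tarball); err != nil {
		// In the log this triggers the scp of the cached tarball first.
		return fmt.Errorf("preload tarball not present: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Placeholder paths; on the guest the tarball lands at /preloaded.tar.lz4.
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println("extract skipped:", err)
	}
}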
	I1001 17:48:11.961777   14186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 17:48:12.005080   14186 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 17:48:12.005109   14186 cache_images.go:85] Images are preloaded, skipping loading
	I1001 17:48:12.005118   14186 kubeadm.go:926] updating node { 192.168.39.98 8443 v1.34.1 crio true true} ...
	I1001 17:48:12.005225   14186 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-289249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-289249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 17:48:12.005289   14186 ssh_runner.go:195] Run: crio config
	I1001 17:48:12.050939   14186 cni.go:84] Creating CNI manager for ""
	I1001 17:48:12.050964   14186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 17:48:12.050979   14186 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 17:48:12.051003   14186 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.98 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-289249 NodeName:addons-289249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 17:48:12.051122   14186 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-289249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.98"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.98"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 17:48:12.051198   14186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1001 17:48:12.062911   14186 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 17:48:12.062970   14186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 17:48:12.073944   14186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 17:48:12.093173   14186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 17:48:12.112068   14186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1001 17:48:12.131129   14186 ssh_runner.go:195] Run: grep 192.168.39.98	control-plane.minikube.internal$ /etc/hosts
	I1001 17:48:12.134840   14186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 17:48:12.148164   14186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 17:48:12.288627   14186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 17:48:12.322202   14186 certs.go:68] Setting up /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249 for IP: 192.168.39.98
	I1001 17:48:12.322223   14186 certs.go:194] generating shared ca certs ...
	I1001 17:48:12.322239   14186 certs.go:226] acquiring lock for ca certs: {Name:mkce5c4f8bce1e11a833f05c4b70f07050ce8e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:12.322378   14186 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key
	I1001 17:48:12.370083   14186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt ...
	I1001 17:48:12.370109   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt: {Name:mk43fb5e018b94259616695a7db25b62a7abea54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:12.370276   14186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key ...
	I1001 17:48:12.370287   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key: {Name:mk380ffcb6753e4f1df595e72caacafb3f670b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:12.370355   14186 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key
	I1001 17:48:12.560248   14186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.crt ...
	I1001 17:48:12.560316   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.crt: {Name:mk6a45c5d5b99c7cb442186cae90544ff23461f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:12.560505   14186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key ...
	I1001 17:48:12.560519   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key: {Name:mkce3910d209840030e07cb896414b0a712a0566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:12.560603   14186 certs.go:256] generating profile certs ...
	I1001 17:48:12.560667   14186 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.key
	I1001 17:48:12.560686   14186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt with IP's: []
	I1001 17:48:12.760952   14186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt ...
	I1001 17:48:12.760982   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: {Name:mk4fb3b543e6d25295031a023d54135cedd59ac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:12.761141   14186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.key ...
	I1001 17:48:12.761152   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.key: {Name:mkefe19adc192fa629a1baf3f3d4ef90d8d6a8e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:12.761720   14186 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.key.38893269
	I1001 17:48:12.761740   14186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.crt.38893269 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.98]
	I1001 17:48:13.156035   14186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.crt.38893269 ...
	I1001 17:48:13.156066   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.crt.38893269: {Name:mkf48d94062c3578e0e25b4e5f3ddf56fd29961f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:13.156222   14186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.key.38893269 ...
	I1001 17:48:13.156234   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.key.38893269: {Name:mkf2d3b476ab31110fad8e3cc550769d405472ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:13.156304   14186 certs.go:381] copying /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.crt.38893269 -> /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.crt
	I1001 17:48:13.156405   14186 certs.go:385] copying /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.key.38893269 -> /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.key
	I1001 17:48:13.156473   14186 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/proxy-client.key
	I1001 17:48:13.156491   14186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/proxy-client.crt with IP's: []
	I1001 17:48:13.509200   14186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/proxy-client.crt ...
	I1001 17:48:13.509231   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/proxy-client.crt: {Name:mk1cd9b1bb94b8b0bb478fa0c393b01da8c0dc5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:13.509409   14186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/proxy-client.key ...
	I1001 17:48:13.509424   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/proxy-client.key: {Name:mk8d7ff7ca7c67e315ea23700c652d529b0b3b69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:13.509635   14186 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 17:48:13.509669   14186 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem (1082 bytes)
	I1001 17:48:13.509695   14186 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem (1123 bytes)
	I1001 17:48:13.509715   14186 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem (1675 bytes)
	I1001 17:48:13.510258   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 17:48:13.539516   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 17:48:13.566834   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 17:48:13.596186   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 17:48:13.624881   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 17:48:13.653741   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 17:48:13.682364   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 17:48:13.710295   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 17:48:13.738450   14186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 17:48:13.766451   14186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 17:48:13.785473   14186 ssh_runner.go:195] Run: openssl version
	I1001 17:48:13.791521   14186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 17:48:13.804258   14186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 17:48:13.809288   14186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 17:48:13.809358   14186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 17:48:13.816414   14186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 17:48:13.829256   14186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 17:48:13.833674   14186 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 17:48:13.833726   14186 kubeadm.go:392] StartCluster: {Name:addons-289249 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-289249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 17:48:13.833823   14186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 17:48:13.833894   14186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 17:48:13.870579   14186 cri.go:89] found id: ""
	I1001 17:48:13.870653   14186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 17:48:13.882357   14186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 17:48:13.897270   14186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 17:48:13.909180   14186 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 17:48:13.909201   14186 kubeadm.go:157] found existing configuration files:
	
	I1001 17:48:13.909257   14186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 17:48:13.920357   14186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 17:48:13.920444   14186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 17:48:13.932684   14186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 17:48:13.943800   14186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 17:48:13.943869   14186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 17:48:13.955257   14186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 17:48:13.967592   14186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 17:48:13.967648   14186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 17:48:13.979241   14186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 17:48:13.990877   14186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 17:48:13.990951   14186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 17:48:14.002867   14186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 17:48:14.053534   14186 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I1001 17:48:14.053603   14186 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 17:48:14.151945   14186 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 17:48:14.152059   14186 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 17:48:14.152195   14186 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 17:48:14.162204   14186 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 17:48:14.381911   14186 out.go:252]   - Generating certificates and keys ...
	I1001 17:48:14.382054   14186 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 17:48:14.382179   14186 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 17:48:14.484796   14186 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 17:48:14.576926   14186 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 17:48:14.670310   14186 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 17:48:14.746863   14186 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 17:48:15.045171   14186 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 17:48:15.045295   14186 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-289249 localhost] and IPs [192.168.39.98 127.0.0.1 ::1]
	I1001 17:48:15.781724   14186 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 17:48:15.781847   14186 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-289249 localhost] and IPs [192.168.39.98 127.0.0.1 ::1]
	I1001 17:48:16.073729   14186 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 17:48:16.382974   14186 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 17:48:16.669459   14186 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 17:48:16.669526   14186 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 17:48:16.815031   14186 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 17:48:17.143366   14186 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 17:48:17.439065   14186 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 17:48:17.601080   14186 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 17:48:17.645602   14186 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 17:48:17.646211   14186 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 17:48:17.648391   14186 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 17:48:17.650990   14186 out.go:252]   - Booting up control plane ...
	I1001 17:48:17.651087   14186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 17:48:17.651172   14186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 17:48:17.652259   14186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 17:48:17.669107   14186 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 17:48:17.669289   14186 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1001 17:48:17.676379   14186 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1001 17:48:17.676700   14186 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 17:48:17.676774   14186 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 17:48:17.849967   14186 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 17:48:17.850083   14186 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 17:48:18.352483   14186 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.909625ms
	I1001 17:48:18.363146   14186 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1001 17:48:18.363281   14186 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.98:8443/livez
	I1001 17:48:18.363546   14186 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1001 17:48:18.364127   14186 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1001 17:48:20.713227   14186 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.350810947s
	I1001 17:48:22.810397   14186 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.449861609s
	I1001 17:48:24.361744   14186 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001726947s
	I1001 17:48:24.378357   14186 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 17:48:24.393385   14186 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 17:48:24.408710   14186 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 17:48:24.408983   14186 kubeadm.go:310] [mark-control-plane] Marking the node addons-289249 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 17:48:24.424321   14186 kubeadm.go:310] [bootstrap-token] Using token: dcf4ry.bmj3tr7urbkcpjkw
	I1001 17:48:24.425609   14186 out.go:252]   - Configuring RBAC rules ...
	I1001 17:48:24.425728   14186 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 17:48:24.430404   14186 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 17:48:24.437371   14186 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 17:48:24.443048   14186 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 17:48:24.445781   14186 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 17:48:24.449178   14186 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 17:48:24.770402   14186 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 17:48:25.201614   14186 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 17:48:25.771011   14186 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 17:48:25.771906   14186 kubeadm.go:310] 
	I1001 17:48:25.771995   14186 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 17:48:25.772008   14186 kubeadm.go:310] 
	I1001 17:48:25.772077   14186 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 17:48:25.772083   14186 kubeadm.go:310] 
	I1001 17:48:25.772109   14186 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 17:48:25.772175   14186 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 17:48:25.772238   14186 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 17:48:25.772259   14186 kubeadm.go:310] 
	I1001 17:48:25.772369   14186 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 17:48:25.772391   14186 kubeadm.go:310] 
	I1001 17:48:25.772482   14186 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 17:48:25.772493   14186 kubeadm.go:310] 
	I1001 17:48:25.772571   14186 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 17:48:25.772701   14186 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 17:48:25.772857   14186 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 17:48:25.772867   14186 kubeadm.go:310] 
	I1001 17:48:25.772975   14186 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 17:48:25.773066   14186 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 17:48:25.773079   14186 kubeadm.go:310] 
	I1001 17:48:25.773195   14186 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dcf4ry.bmj3tr7urbkcpjkw \
	I1001 17:48:25.773351   14186 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bbcb137d3fae8b26e7a39819525d4d9dcd5cccec4e46324317306fb87c30e08c \
	I1001 17:48:25.773388   14186 kubeadm.go:310] 	--control-plane 
	I1001 17:48:25.773401   14186 kubeadm.go:310] 
	I1001 17:48:25.773571   14186 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 17:48:25.773588   14186 kubeadm.go:310] 
	I1001 17:48:25.773706   14186 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dcf4ry.bmj3tr7urbkcpjkw \
	I1001 17:48:25.773838   14186 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bbcb137d3fae8b26e7a39819525d4d9dcd5cccec4e46324317306fb87c30e08c 
	I1001 17:48:25.775141   14186 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 17:48:25.775166   14186 cni.go:84] Creating CNI manager for ""
	I1001 17:48:25.775177   14186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 17:48:25.776711   14186 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 17:48:25.778164   14186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 17:48:25.790611   14186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 17:48:25.812802   14186 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 17:48:25.812932   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:25.812938   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-289249 minikube.k8s.io/updated_at=2025_10_01T17_48_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=de12e0f54d226aca16c1f78311795f5ec99c1492 minikube.k8s.io/name=addons-289249 minikube.k8s.io/primary=true
	I1001 17:48:25.853571   14186 ops.go:34] apiserver oom_adj: -16
	I1001 17:48:25.927647   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:26.428294   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:26.927762   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:27.427952   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:27.928540   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:28.428612   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:28.927751   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:29.427837   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:29.928343   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:30.428274   14186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 17:48:30.502957   14186 kubeadm.go:1105] duration metric: took 4.690137027s to wait for elevateKubeSystemPrivileges
	I1001 17:48:30.503004   14186 kubeadm.go:394] duration metric: took 16.66928008s to StartCluster
	I1001 17:48:30.503030   14186 settings.go:142] acquiring lock: {Name:mk5d6ab23dfd36d7b84e4e5d63470620e0207b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:30.503175   14186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 17:48:30.503777   14186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/kubeconfig: {Name:mkccaec248bac902ba8059942e9729c12d140d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 17:48:30.504048   14186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 17:48:30.504097   14186 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 17:48:30.504160   14186 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1001 17:48:30.504309   14186 addons.go:69] Setting yakd=true in profile "addons-289249"
	I1001 17:48:30.504315   14186 config.go:182] Loaded profile config "addons-289249": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 17:48:30.504311   14186 addons.go:69] Setting default-storageclass=true in profile "addons-289249"
	I1001 17:48:30.504331   14186 addons.go:238] Setting addon yakd=true in "addons-289249"
	I1001 17:48:30.504339   14186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-289249"
	I1001 17:48:30.504358   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.504365   14186 addons.go:69] Setting gcp-auth=true in profile "addons-289249"
	I1001 17:48:30.504383   14186 mustload.go:65] Loading cluster: addons-289249
	I1001 17:48:30.504342   14186 addons.go:69] Setting cloud-spanner=true in profile "addons-289249"
	I1001 17:48:30.504415   14186 addons.go:238] Setting addon cloud-spanner=true in "addons-289249"
	I1001 17:48:30.504408   14186 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-289249"
	I1001 17:48:30.504497   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.504478   14186 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-289249"
	I1001 17:48:30.504517   14186 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-289249"
	I1001 17:48:30.504536   14186 config.go:182] Loaded profile config "addons-289249": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 17:48:30.504546   14186 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-289249"
	I1001 17:48:30.505074   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.505130   14186 addons.go:69] Setting registry=true in profile "addons-289249"
	I1001 17:48:30.505144   14186 addons.go:238] Setting addon registry=true in "addons-289249"
	I1001 17:48:30.505143   14186 addons.go:69] Setting registry-creds=true in profile "addons-289249"
	I1001 17:48:30.505163   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.505161   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.505164   14186 addons.go:238] Setting addon registry-creds=true in "addons-289249"
	I1001 17:48:30.505170   14186 addons.go:69] Setting ingress-dns=true in profile "addons-289249"
	I1001 17:48:30.505191   14186 addons.go:238] Setting addon ingress-dns=true in "addons-289249"
	I1001 17:48:30.505200   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.505226   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.505574   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.505127   14186 addons.go:69] Setting storage-provisioner=true in profile "addons-289249"
	I1001 17:48:30.505627   14186 addons.go:238] Setting addon storage-provisioner=true in "addons-289249"
	I1001 17:48:30.505649   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.505653   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.505650   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.505678   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.505680   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.505690   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.505698   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.505719   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.506026   14186 addons.go:69] Setting ingress=true in profile "addons-289249"
	I1001 17:48:30.506053   14186 addons.go:238] Setting addon ingress=true in "addons-289249"
	I1001 17:48:30.506095   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.506174   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.506225   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.506259   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.506371   14186 addons.go:69] Setting metrics-server=true in profile "addons-289249"
	I1001 17:48:30.506395   14186 addons.go:238] Setting addon metrics-server=true in "addons-289249"
	I1001 17:48:30.506439   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.506872   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.506902   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.507057   14186 addons.go:69] Setting inspektor-gadget=true in profile "addons-289249"
	I1001 17:48:30.507068   14186 addons.go:238] Setting addon inspektor-gadget=true in "addons-289249"
	I1001 17:48:30.507096   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.507595   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.506103   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.507652   14186 addons.go:69] Setting volumesnapshots=true in profile "addons-289249"
	I1001 17:48:30.507671   14186 addons.go:238] Setting addon volumesnapshots=true in "addons-289249"
	I1001 17:48:30.507713   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.504359   14186 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-289249"
	I1001 17:48:30.507752   14186 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-289249"
	I1001 17:48:30.507775   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.508091   14186 out.go:179] * Verifying Kubernetes components...
	I1001 17:48:30.507627   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.508152   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.508187   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.508352   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.508402   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.508406   14186 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-289249"
	I1001 17:48:30.508440   14186 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-289249"
	I1001 17:48:30.508795   14186 addons.go:69] Setting volcano=true in profile "addons-289249"
	I1001 17:48:30.510684   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.510723   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.512460   14186 addons.go:238] Setting addon volcano=true in "addons-289249"
	I1001 17:48:30.512503   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.512726   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.512792   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.512921   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.512964   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.513026   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.513048   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.518948   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.518985   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.519002   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.519019   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.519535   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.519576   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.533055   14186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 17:48:30.533196   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I1001 17:48:30.533347   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I1001 17:48:30.534145   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.534647   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.534681   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.534988   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.535523   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.535537   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.535598   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.535880   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.536470   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.536500   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.540076   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.540207   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.541244   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I1001 17:48:30.542474   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.543106   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.543128   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.543804   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.544419   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.544471   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.544710   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33701
	I1001 17:48:30.544907   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34823
	I1001 17:48:30.551602   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I1001 17:48:30.551778   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.551899   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45299
	I1001 17:48:30.555543   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I1001 17:48:30.555637   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.555671   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.555691   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.555757   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.559663   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.559689   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.559720   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.559736   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.559844   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.559864   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.560413   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.560460   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.561320   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.561390   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.561561   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.562156   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.562196   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.562634   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.562654   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.562717   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I1001 17:48:30.562769   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.562781   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.563296   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.563360   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.563805   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.563921   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.563941   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.564691   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.564728   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.564967   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I1001 17:48:30.570564   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I1001 17:48:30.570566   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I1001 17:48:30.570571   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.571196   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.570580   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.571246   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.570600   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38297
	I1001 17:48:30.570603   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.572186   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.572205   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.572595   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.572915   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.572928   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.573447   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.573461   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.573810   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.573884   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.573937   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.574498   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.574959   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.574986   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.574992   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.575007   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.575162   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.575173   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.575215   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.579833   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.579858   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I1001 17:48:30.579840   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.579912   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.579885   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I1001 17:48:30.579963   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40061
	I1001 17:48:30.580224   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.580825   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.580845   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.580864   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.580947   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.581303   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.581456   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.581470   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.582685   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.582780   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.583214   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.583234   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.583555   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.583601   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.584233   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.584279   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.584343   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.584490   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.584918   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.585521   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.585807   14186 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 17:48:30.585007   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.586128   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.586801   14186 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1001 17:48:30.586877   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.587732   14186 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 17:48:30.587749   14186 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 17:48:30.587767   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.588614   14186 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 17:48:30.588658   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1001 17:48:30.588678   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.592329   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I1001 17:48:30.594915   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.595042   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.595075   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.595120   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I1001 17:48:30.596299   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.597022   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.597043   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.597180   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.597192   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.599397   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.599761   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.600107   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.604947   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.604985   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.604997   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I1001 17:48:30.605013   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.605038   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.605063   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.605080   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.605096   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.604947   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.605116   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.604951   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I1001 17:48:30.605283   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.605585   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.605851   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.606036   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.606052   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.606101   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.606341   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.606483   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.606746   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.606762   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.607157   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.607386   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.608661   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.609114   14186 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 17:48:30.609930   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I1001 17:48:30.610139   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.610160   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.610213   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I1001 17:48:30.610978   14186 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 17:48:30.610998   14186 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 17:48:30.611015   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.611101   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.611218   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.611364   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.611559   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.611920   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35471
	I1001 17:48:30.611766   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.612115   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.612937   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I1001 17:48:30.613299   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.613318   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.613766   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.613851   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.613902   14186 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1001 17:48:30.614553   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.614597   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.614685   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.614699   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.614950   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.615125   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.615276   14186 addons.go:238] Setting addon default-storageclass=true in "addons-289249"
	I1001 17:48:30.615318   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.615425   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.615718   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.615752   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.615759   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.615983   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.616047   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.617751   14186 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1001 17:48:30.617780   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.617794   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.618151   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.618490   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.620150   14186 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1001 17:48:30.620587   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.621088   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.621501   14186 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-289249"
	I1001 17:48:30.621549   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:30.621901   14186 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 17:48:30.621928   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 17:48:30.621937   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.621947   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.621974   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.622017   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.623524   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.623628   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.623647   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.623841   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.624794   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.624865   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I1001 17:48:30.624863   14186 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1001 17:48:30.625187   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.625243   14186 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1001 17:48:30.625380   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.626060   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.626074   14186 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 17:48:30.626089   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 17:48:30.626104   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.625422   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I1001 17:48:30.627018   14186 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1001 17:48:30.627036   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1001 17:48:30.627052   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.627742   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I1001 17:48:30.627803   14186 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 17:48:30.628738   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.628758   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.630034   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.630384   14186 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 17:48:30.630560   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 17:48:30.630627   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.631026   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.631245   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.631198   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.631811   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.631909   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.632317   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.632334   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.632696   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.632914   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.633036   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.633071   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.633125   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.633441   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.633904   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.634330   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.634596   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.634819   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.635213   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.637279   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.637300   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.637330   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45013
	I1001 17:48:30.637654   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.637750   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.637662   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.638175   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:30.638189   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:30.638180   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.638476   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.638499   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.638512   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:30.638785   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.638803   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:30.638810   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:30.638819   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:30.638827   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:30.638934   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I1001 17:48:30.639189   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.639411   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.639463   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.639475   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.639606   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.639665   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.639681   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.639692   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.640124   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.640125   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.640207   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.640385   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40077
	I1001 17:48:30.640515   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:30.640524   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	W1001 17:48:30.640590   14186 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1001 17:48:30.641171   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.641331   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.641391   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.641528   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.641541   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.641555   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.641572   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.641528   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I1001 17:48:30.642014   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.642105   14186 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1001 17:48:30.642296   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.642423   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.642611   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.643542   14186 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1001 17:48:30.643560   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 17:48:30.643576   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.643866   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.644253   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.644306   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.644506   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.644638   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.644703   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.644810   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.645052   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.645945   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.646149   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.647000   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.647310   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.649087   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.649244   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.649824   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.650411   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.650599   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.650891   14186 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1001 17:48:30.650930   14186 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1001 17:48:30.651221   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.651322   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.651407   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.651610   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.651687   14186 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1001 17:48:30.651822   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.652251   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.652411   14186 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 17:48:30.652821   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36065
	I1001 17:48:30.652527   14186 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1001 17:48:30.652951   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1001 17:48:30.652967   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.652768   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41049
	I1001 17:48:30.653346   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.653450   14186 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 17:48:30.653469   14186 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 17:48:30.653485   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.653500   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.653827   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.653840   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.653982   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.654031   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.654516   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.654516   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.654755   14186 out.go:179]   - Using image docker.io/registry:3.0.0
	I1001 17:48:30.654756   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.655266   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.655338   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.656302   14186 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 17:48:30.656446   14186 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 17:48:30.656459   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 17:48:30.656476   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.659033   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33421
	I1001 17:48:30.659261   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.659400   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.659785   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.659858   14186 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 17:48:30.660193   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.660383   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.660339   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.660553   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.660561   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.660804   14186 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1001 17:48:30.660892   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.661013   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.661405   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.661508   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.662209   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.661736   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.662395   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.661894   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.661956   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.662068   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:30.662498   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.662501   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:30.662472   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.662594   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.662624   14186 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 17:48:30.662640   14186 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1001 17:48:30.662660   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.662663   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.662755   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.662820   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.663024   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.663021   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.663249   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.663549   14186 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 17:48:30.665147   14186 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 17:48:30.666257   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.666793   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.666824   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.667022   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.667151   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.667245   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.667347   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.667996   14186 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 17:48:30.669337   14186 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 17:48:30.670833   14186 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 17:48:30.672030   14186 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 17:48:30.672043   14186 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 17:48:30.672058   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.676174   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.676836   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.676868   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.677031   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.677232   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.677336   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I1001 17:48:30.677406   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.677570   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.677885   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.678421   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.678455   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.678811   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.678990   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.681047   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.681275   14186 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 17:48:30.681291   14186 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 17:48:30.681307   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.682034   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41661
	I1001 17:48:30.682536   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:30.683159   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:30.683182   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:30.683572   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:30.683773   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:30.685097   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.685588   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:30.685599   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.685636   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.685852   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.686012   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.686168   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.686312   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:30.687594   14186 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 17:48:30.688813   14186 out.go:179]   - Using image docker.io/busybox:stable
	I1001 17:48:30.689950   14186 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 17:48:30.689965   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 17:48:30.689985   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:30.693290   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.693774   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:30.693813   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:30.693986   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:30.694179   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:30.694373   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:30.694531   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	W1001 17:48:31.027834   14186 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42166->192.168.39.98:22: read: connection reset by peer
	I1001 17:48:31.027878   14186 retry.go:31] will retry after 307.886743ms: ssh: handshake failed: read tcp 192.168.39.1:42166->192.168.39.98:22: read: connection reset by peer
	W1001 17:48:31.112973   14186 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42182->192.168.39.98:22: read: connection reset by peer
	I1001 17:48:31.113003   14186 retry.go:31] will retry after 281.844717ms: ssh: handshake failed: read tcp 192.168.39.1:42182->192.168.39.98:22: read: connection reset by peer
	I1001 17:48:31.518216   14186 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.014133661s)
	I1001 17:48:31.518288   14186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 17:48:31.518366   14186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 17:48:31.630264   14186 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 17:48:31.630302   14186 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 17:48:31.750282   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 17:48:31.761188   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 17:48:31.811311   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 17:48:31.834284   14186 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 17:48:31.834319   14186 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 17:48:31.844674   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 17:48:31.845089   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1001 17:48:31.862517   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1001 17:48:31.896225   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 17:48:31.940773   14186 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 17:48:31.940799   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 17:48:31.962934   14186 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:48:31.962956   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1001 17:48:32.033482   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 17:48:32.072920   14186 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 17:48:32.072953   14186 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 17:48:32.197709   14186 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 17:48:32.197740   14186 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 17:48:32.212498   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 17:48:32.275561   14186 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 17:48:32.275622   14186 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 17:48:32.335204   14186 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 17:48:32.335231   14186 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 17:48:32.345144   14186 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 17:48:32.345172   14186 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 17:48:32.406558   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:48:32.524528   14186 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 17:48:32.524556   14186 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 17:48:32.527041   14186 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 17:48:32.527066   14186 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 17:48:32.564947   14186 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 17:48:32.564981   14186 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 17:48:32.630621   14186 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 17:48:32.630646   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 17:48:32.685670   14186 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 17:48:32.685705   14186 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 17:48:32.696085   14186 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 17:48:32.696112   14186 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 17:48:32.696364   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 17:48:32.823044   14186 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 17:48:32.823071   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 17:48:32.909829   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 17:48:32.967831   14186 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 17:48:32.967869   14186 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 17:48:32.978757   14186 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 17:48:32.978783   14186 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 17:48:33.177772   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 17:48:33.274678   14186 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 17:48:33.274700   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 17:48:33.285983   14186 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 17:48:33.286018   14186 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 17:48:33.694101   14186 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 17:48:33.694126   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 17:48:33.708118   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 17:48:34.122285   14186 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 17:48:34.122319   14186 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 17:48:34.564578   14186 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.046186505s)
	I1001 17:48:34.564635   14186 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.046320221s)
	I1001 17:48:34.564641   14186 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1001 17:48:34.564744   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.814431693s)
	I1001 17:48:34.564784   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:34.564797   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:34.565082   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:34.565130   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:34.565141   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:34.565156   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:34.565168   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:34.565393   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:34.565410   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:34.565420   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:34.565564   14186 node_ready.go:35] waiting up to 6m0s for node "addons-289249" to be "Ready" ...
	I1001 17:48:34.574720   14186 node_ready.go:49] node "addons-289249" is "Ready"
	I1001 17:48:34.574751   14186 node_ready.go:38] duration metric: took 9.156186ms for node "addons-289249" to be "Ready" ...
	I1001 17:48:34.574764   14186 api_server.go:52] waiting for apiserver process to appear ...
	I1001 17:48:34.574807   14186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 17:48:34.637226   14186 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 17:48:34.637246   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 17:48:35.072613   14186 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-289249" context rescaled to 1 replicas
	I1001 17:48:35.088519   14186 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 17:48:35.088547   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 17:48:35.426632   14186 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 17:48:35.426665   14186 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 17:48:35.526940   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 17:48:36.999318   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.238094117s)
	I1001 17:48:36.999343   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.187993234s)
	I1001 17:48:36.999370   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:36.999388   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:36.999401   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:36.999466   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:36.999860   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:36.999879   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:36.999889   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:36.999897   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:36.999912   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:36.999923   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:36.999943   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:36.999952   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:36.999959   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:37.000171   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:37.000199   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:37.000215   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:37.000215   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:37.000200   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:37.000231   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.060715   14186 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 17:48:38.060756   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:38.064125   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:38.064680   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:38.064709   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:38.064962   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:38.065161   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:38.065336   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:38.065510   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
	I1001 17:48:38.434256   14186 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 17:48:38.554119   14186 addons.go:238] Setting addon gcp-auth=true in "addons-289249"
	I1001 17:48:38.554166   14186 host.go:66] Checking if "addons-289249" exists ...
	I1001 17:48:38.554446   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:38.554469   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:38.568712   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
	I1001 17:48:38.569250   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:38.569781   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:38.569806   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:38.570140   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:38.570764   14186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:48:38.570800   14186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:48:38.584446   14186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I1001 17:48:38.585011   14186 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:48:38.585514   14186 main.go:141] libmachine: Using API Version  1
	I1001 17:48:38.585535   14186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:48:38.585874   14186 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:48:38.586075   14186 main.go:141] libmachine: (addons-289249) Calling .GetState
	I1001 17:48:38.588204   14186 main.go:141] libmachine: (addons-289249) Calling .DriverName
	I1001 17:48:38.588416   14186 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 17:48:38.588476   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHHostname
	I1001 17:48:38.592361   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:38.592941   14186 main.go:141] libmachine: (addons-289249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:90:b2", ip: ""} in network mk-addons-289249: {Iface:virbr1 ExpiryTime:2025-10-01 18:48:04 +0000 UTC Type:0 Mac:52:54:00:a8:90:b2 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-289249 Clientid:01:52:54:00:a8:90:b2}
	I1001 17:48:38.592964   14186 main.go:141] libmachine: (addons-289249) DBG | domain addons-289249 has defined IP address 192.168.39.98 and MAC address 52:54:00:a8:90:b2 in network mk-addons-289249
	I1001 17:48:38.593212   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHPort
	I1001 17:48:38.593398   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHKeyPath
	I1001 17:48:38.593582   14186 main.go:141] libmachine: (addons-289249) Calling .GetSSHUsername
	I1001 17:48:38.593758   14186 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/addons-289249/id_rsa Username:docker}
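Aside for readers of this log: the `sshutil.go:53] new ssh client` entries correspond to minikube opening an SSH connection to the VM with the machine's private key. The sketch below is only an illustration of such a client using golang.org/x/crypto/ssh; the address, user and key path are placeholders taken from the log, not minikube's actual sshutil code.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder values mirroring the log above; adjust for a real host.
	host := "192.168.39.98:22"
	keyPath := os.ExpandEnv("$HOME/.minikube/machines/addons-289249/id_rsa")

	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatalf("read key: %v", err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatalf("parse key: %v", err)
	}

	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Illustration only: a real client should pin the VM's host key.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}

	client, err := ssh.Dial("tcp", host, cfg)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatalf("session: %v", err)
	}
	defer session.Close()

	// Same kind of remote command the log shows being run over SSH.
	out, err := session.Output("cat /var/lib/minikube/google_application_credentials.json")
	if err != nil {
		log.Fatalf("run: %v", err)
	}
	fmt.Printf("%s", out)
}
```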
	I1001 17:48:38.888742   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.043623095s)
	I1001 17:48:38.888796   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.888809   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.888916   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.026367419s)
	I1001 17:48:38.888960   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.888973   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.888968   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.044259304s)
	I1001 17:48:38.889002   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889022   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.992760245s)
	I1001 17:48:38.889047   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889060   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889074   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889113   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.889120   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.889125   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.889134   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889142   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889155   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.676626269s)
	I1001 17:48:38.889167   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.855655367s)
	I1001 17:48:38.889193   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889207   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889175   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889249   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889259   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.482676732s)
	W1001 17:48:38.889278   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:38.889294   14186 retry.go:31] will retry after 246.649531ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
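The `retry.go:31] will retry after ...` entries show the failed apply being re-run after an increasing delay. A minimal, hypothetical retry-with-backoff loop in Go (not the actual minikube retry package) that reproduces this pattern:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs op up to attempts times, sleeping an increasing, slightly
// jittered delay between failures, similar to the "will retry after"
// behaviour seen in the log above.
func retry(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("simulated apply failure %d", calls)
		}
		return nil
	})
	fmt.Println("result:", err)
}
```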
	I1001 17:48:38.889314   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.192928978s)
	I1001 17:48:38.889331   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.979469019s)
	I1001 17:48:38.889334   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889343   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889347   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889355   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889412   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.889413   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.71161553s)
	I1001 17:48:38.889420   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.889419   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.889452   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.889453   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889460   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889460   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.889466   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889467   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889472   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.889481   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889487   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889525   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.889548   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.889554   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.889574   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.889581   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.889659   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.889669   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.889681   14186 addons.go:479] Verifying addon ingress=true in "addons-289249"
	I1001 17:48:38.889788   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.889812   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.889820   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.890086   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.890116   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.890123   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.890131   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.890139   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.890200   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.890227   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.890246   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.890254   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.890260   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.891448   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.891467   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.891475   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.891482   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.891544   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.891578   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.891585   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.891750   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.891781   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.891789   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.891902   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.891921   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.891931   14186 addons.go:479] Verifying addon registry=true in "addons-289249"
	I1001 17:48:38.893072   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.893117   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.893129   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.893138   14186 addons.go:479] Verifying addon metrics-server=true in "addons-289249"
	I1001 17:48:38.893269   14186 out.go:179] * Verifying ingress addon...
	I1001 17:48:38.893443   14186 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-289249 service yakd-dashboard -n yakd-dashboard
	
	I1001 17:48:38.893512   14186 out.go:179] * Verifying registry addon...
	I1001 17:48:38.893530   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.889766   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.894726   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.894740   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.894750   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.893542   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.894812   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.894819   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.894826   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.893697   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.894905   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.893720   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.895095   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.895125   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.895131   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.896315   14186 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 17:48:38.896635   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.896666   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.896678   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:38.897979   14186 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 17:48:38.914931   14186 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 17:48:38.914953   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:38.916168   14186 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 17:48:38.916186   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
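The repeated `kapi.go:96] waiting for pod ...` entries below come from polling the API server until the pods behind a label selector leave Pending. An illustrative client-go polling loop (kubeconfig path and timeout are assumptions, not minikube's kapi implementation):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in ns is Running.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 5*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```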
	I1001 17:48:38.960261   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.960288   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.960672   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:38.960673   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.960699   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	W1001 17:48:38.960789   14186 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1001 17:48:38.986061   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:38.986082   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:38.986366   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:38.986385   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:39.136740   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:48:39.306213   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.598053428s)
	W1001 17:48:39.306267   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 17:48:39.306291   14186 retry.go:31] will retry after 268.189952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 17:48:39.306236   14186 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.731414582s)
	I1001 17:48:39.306314   14186 api_server.go:72] duration metric: took 8.802184401s to wait for apiserver process to appear ...
	I1001 17:48:39.306324   14186 api_server.go:88] waiting for apiserver healthz status ...
	I1001 17:48:39.306340   14186 api_server.go:253] Checking apiserver healthz at https://192.168.39.98:8443/healthz ...
	I1001 17:48:39.318685   14186 api_server.go:279] https://192.168.39.98:8443/healthz returned 200:
	ok
	I1001 17:48:39.322303   14186 api_server.go:141] control plane version: v1.34.1
	I1001 17:48:39.322328   14186 api_server.go:131] duration metric: took 15.997671ms to wait for apiserver health ...
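The `api_server.go` entries above poll the apiserver's /healthz endpoint until it answers 200 "ok". A standalone sketch of that probe against the same address (TLS verification is skipped here purely for illustration; a real check would trust the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	url := "https://192.168.39.98:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	for attempt := 1; attempt <= 30; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}
```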
	I1001 17:48:39.322337   14186 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 17:48:39.341807   14186 system_pods.go:59] 15 kube-system pods found
	I1001 17:48:39.341883   14186 system_pods.go:61] "amd-gpu-device-plugin-lj7zx" [458fbc9f-e3be-4fee-ab72-7f4935340a55] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1001 17:48:39.341896   14186 system_pods.go:61] "coredns-66bc5c9577-jr97w" [d2cae826-bcdd-4aca-8660-582a846c9a1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 17:48:39.341908   14186 system_pods.go:61] "coredns-66bc5c9577-x9ql9" [a98c76e8-0151-4a64-960b-088b4180e8be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 17:48:39.341914   14186 system_pods.go:61] "etcd-addons-289249" [69a8a433-3077-4beb-8e61-1189249b3718] Running
	I1001 17:48:39.341921   14186 system_pods.go:61] "kube-apiserver-addons-289249" [721984e9-d3b7-47d8-ae58-70bb106ca9a5] Running
	I1001 17:48:39.341930   14186 system_pods.go:61] "kube-controller-manager-addons-289249" [7c253923-8da5-413b-838e-22bdcc9d3fba] Running
	I1001 17:48:39.341939   14186 system_pods.go:61] "kube-ingress-dns-minikube" [2b3ad449-2a8f-46c0-b054-650022c2eaa2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 17:48:39.341948   14186 system_pods.go:61] "kube-proxy-qqv7b" [80c986ca-87d5-4a30-804f-3c19cd5fcfa5] Running
	I1001 17:48:39.341954   14186 system_pods.go:61] "kube-scheduler-addons-289249" [8fee9b27-d93d-491d-9553-56415509180f] Running
	I1001 17:48:39.341961   14186 system_pods.go:61] "metrics-server-85b7d694d7-9vxgb" [9266edab-0025-4ccb-8c18-124badd0f0db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 17:48:39.341972   14186 system_pods.go:61] "nvidia-device-plugin-daemonset-bwg47" [e9a22707-ec3b-4876-a345-51411014cf5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 17:48:39.341981   14186 system_pods.go:61] "registry-66898fdd98-l4mmr" [65f1723b-c2a7-4cd0-b4dd-56463fe8a7df] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 17:48:39.341989   14186 system_pods.go:61] "registry-creds-764b6fb674-mcgqn" [95561aad-e45a-4959-9c3a-44255322620e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1001 17:48:39.342001   14186 system_pods.go:61] "registry-proxy-rzht2" [b910836a-896c-4366-866b-eea6834f1e7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 17:48:39.342012   14186 system_pods.go:61] "storage-provisioner" [4026b516-5215-4991-aaef-5899ce674e96] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 17:48:39.342024   14186 system_pods.go:74] duration metric: took 19.68071ms to wait for pod list to return data ...
	I1001 17:48:39.342038   14186 default_sa.go:34] waiting for default service account to be created ...
	I1001 17:48:39.377211   14186 default_sa.go:45] found service account: "default"
	I1001 17:48:39.377236   14186 default_sa.go:55] duration metric: took 35.187382ms for default service account to be created ...
	I1001 17:48:39.377245   14186 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 17:48:39.426007   14186 system_pods.go:86] 17 kube-system pods found
	I1001 17:48:39.426044   14186 system_pods.go:89] "amd-gpu-device-plugin-lj7zx" [458fbc9f-e3be-4fee-ab72-7f4935340a55] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1001 17:48:39.426052   14186 system_pods.go:89] "coredns-66bc5c9577-jr97w" [d2cae826-bcdd-4aca-8660-582a846c9a1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 17:48:39.426058   14186 system_pods.go:89] "coredns-66bc5c9577-x9ql9" [a98c76e8-0151-4a64-960b-088b4180e8be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 17:48:39.426063   14186 system_pods.go:89] "etcd-addons-289249" [69a8a433-3077-4beb-8e61-1189249b3718] Running
	I1001 17:48:39.426068   14186 system_pods.go:89] "kube-apiserver-addons-289249" [721984e9-d3b7-47d8-ae58-70bb106ca9a5] Running
	I1001 17:48:39.426074   14186 system_pods.go:89] "kube-controller-manager-addons-289249" [7c253923-8da5-413b-838e-22bdcc9d3fba] Running
	I1001 17:48:39.426082   14186 system_pods.go:89] "kube-ingress-dns-minikube" [2b3ad449-2a8f-46c0-b054-650022c2eaa2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 17:48:39.426091   14186 system_pods.go:89] "kube-proxy-qqv7b" [80c986ca-87d5-4a30-804f-3c19cd5fcfa5] Running
	I1001 17:48:39.426097   14186 system_pods.go:89] "kube-scheduler-addons-289249" [8fee9b27-d93d-491d-9553-56415509180f] Running
	I1001 17:48:39.426105   14186 system_pods.go:89] "metrics-server-85b7d694d7-9vxgb" [9266edab-0025-4ccb-8c18-124badd0f0db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 17:48:39.426119   14186 system_pods.go:89] "nvidia-device-plugin-daemonset-bwg47" [e9a22707-ec3b-4876-a345-51411014cf5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 17:48:39.426128   14186 system_pods.go:89] "registry-66898fdd98-l4mmr" [65f1723b-c2a7-4cd0-b4dd-56463fe8a7df] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 17:48:39.426138   14186 system_pods.go:89] "registry-creds-764b6fb674-mcgqn" [95561aad-e45a-4959-9c3a-44255322620e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1001 17:48:39.426145   14186 system_pods.go:89] "registry-proxy-rzht2" [b910836a-896c-4366-866b-eea6834f1e7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 17:48:39.426151   14186 system_pods.go:89] "snapshot-controller-7d9fbc56b8-88zts" [fb58af05-9499-44f7-a320-4bbea3cb96a7] Pending
	I1001 17:48:39.426156   14186 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pmbk2" [abbe756a-2b4a-4320-a48c-40fafc614f31] Pending
	I1001 17:48:39.426161   14186 system_pods.go:89] "storage-provisioner" [4026b516-5215-4991-aaef-5899ce674e96] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 17:48:39.426169   14186 system_pods.go:126] duration metric: took 48.919022ms to wait for k8s-apps to be running ...
	I1001 17:48:39.426177   14186 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 17:48:39.426230   14186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 17:48:39.448924   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:39.449126   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:39.575547   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 17:48:40.003248   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:40.003549   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:40.133830   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.606831262s)
	I1001 17:48:40.133897   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:40.133915   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:40.133927   14186 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.545486528s)
	I1001 17:48:40.134253   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:40.134271   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:40.134279   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:40.134279   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:40.134285   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:40.134521   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:40.134546   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:40.134557   14186 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-289249"
	I1001 17:48:40.134528   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:40.135288   14186 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1001 17:48:40.136763   14186 out.go:179] * Verifying csi-hostpath-driver addon...
	I1001 17:48:40.137945   14186 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1001 17:48:40.138587   14186 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 17:48:40.138982   14186 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 17:48:40.138996   14186 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 17:48:40.183190   14186 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 17:48:40.183212   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:40.279743   14186 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 17:48:40.279766   14186 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 17:48:40.404187   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:40.404212   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:40.458055   14186 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 17:48:40.458078   14186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 17:48:40.586331   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 17:48:40.644706   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:40.899640   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:40.904620   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:41.151388   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:41.401193   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:41.406558   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:41.644055   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:41.907058   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:41.907114   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:42.157078   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:42.417878   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:42.425949   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:42.485961   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.34917906s)
	I1001 17:48:42.485994   14186 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.059741383s)
	W1001 17:48:42.486010   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:42.486018   14186 system_svc.go:56] duration metric: took 3.059838008s WaitForService to wait for kubelet
	I1001 17:48:42.486034   14186 retry.go:31] will retry after 327.750036ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
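The `systemctl is-active --quiet ... kubelet` check that completed above relies on the exit status alone: 0 means the unit is active. A small local sketch of the same idea (run directly on the node rather than over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "is-active --quiet" prints nothing and signals the state via the
	// exit code: 0 means the unit is active, non-zero means it is not.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
			return
		}
		fmt.Printf("could not run systemctl: %v\n", err)
		return
	}
	fmt.Println("kubelet is active")
}
```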
	I1001 17:48:42.486029   14186 kubeadm.go:578] duration metric: took 11.98189788s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 17:48:42.486051   14186 node_conditions.go:102] verifying NodePressure condition ...
	I1001 17:48:42.486105   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.910516016s)
	I1001 17:48:42.486157   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:42.486174   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:42.486183   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.899820676s)
	I1001 17:48:42.486218   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:42.486234   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:42.486475   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:42.486492   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:42.486516   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:42.486532   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:42.486639   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:42.486646   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:42.486652   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:42.486661   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:48:42.486667   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:48:42.486708   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:42.486717   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:42.486970   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:48:42.486990   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:48:42.487007   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:48:42.488027   14186 addons.go:479] Verifying addon gcp-auth=true in "addons-289249"
	I1001 17:48:42.489836   14186 out.go:179] * Verifying gcp-auth addon...
	I1001 17:48:42.491764   14186 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 17:48:42.495827   14186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 17:48:42.495855   14186 node_conditions.go:123] node cpu capacity is 2
	I1001 17:48:42.495870   14186 node_conditions.go:105] duration metric: took 9.81313ms to run NodePressure ...
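The `node_conditions.go` entries above report the node's ephemeral-storage and CPU capacity and verify that no pressure conditions are set. An illustrative client-go version of that check (kubeconfig location assumed; not minikube's implementation):

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())

		// Report any pressure condition that is not explicitly False.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status != corev1.ConditionFalse {
					fmt.Printf("  %s is %s: %s\n", c.Type, c.Status, c.Message)
				}
			}
		}
	}
}
```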
	I1001 17:48:42.495890   14186 start.go:241] waiting for startup goroutines ...
	I1001 17:48:42.510915   14186 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 17:48:42.510944   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:42.646218   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:42.814411   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:48:42.903859   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:42.904038   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:43.003721   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:43.145233   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:43.408242   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:43.408463   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:43.509092   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:43.646017   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:43.906595   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:43.907280   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:44.007777   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:44.143329   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:44.239659   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.425185356s)
	W1001 17:48:44.239703   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:44.239727   14186 retry.go:31] will retry after 561.135985ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:44.400245   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:44.401887   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:44.495003   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:44.642892   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:44.801042   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:48:44.901667   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:44.901748   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:44.995519   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:45.143518   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:45.401121   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:45.401941   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1001 17:48:45.469489   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:45.469523   14186 retry.go:31] will retry after 898.06822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:45.495741   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:45.642211   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:45.900142   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:45.902700   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:45.996901   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:46.143562   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:46.368646   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:48:46.407772   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:46.408144   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:46.498358   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:46.643589   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:46.900960   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:46.903758   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:46.996944   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:47.146847   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:47.402100   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:47.406697   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:47.497523   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:47.541696   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.173006349s)
	W1001 17:48:47.541738   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:47.541758   14186 retry.go:31] will retry after 696.792004ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:47.644153   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:47.902641   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:47.902764   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:47.996336   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:48.143077   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:48.239182   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:48:48.402275   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:48.403326   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:48.495518   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:48.647316   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:48.902761   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:48.905345   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:48.995016   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:49.144173   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:49.404417   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:49.405847   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:49.491595   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.252373567s)
	W1001 17:48:49.491653   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:49.491678   14186 retry.go:31] will retry after 2.734956732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
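
Every one of these apply attempts fails with the same stderr: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the object it parses from the file carries no apiVersion or kind field (for a CRD those would normally be apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition), which is why the other manifests in the same command still apply ("unchanged"/"configured") while the CRD file errors out. Below is a minimal Go sketch of the check the validator is effectively doing; the file name and the use of gopkg.in/yaml.v3 are illustrative assumptions, not minikube code.

// checkmanifest.go - illustrative only: verify that every YAML document in a
// manifest sets apiVersion and kind, which is the condition kubectl's
// validator reports as violated in the log above.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the failing manifest
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break // no more YAML documents in the file
		}
		if err != nil {
			panic(err)
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion or kind not set\n", i)
		}
	}
}

The workaround kubectl itself suggests (--validate=false) would only suppress the check; the manifest would still need valid apiVersion/kind fields to be accepted by the API server.
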
	I1001 17:48:49.496060   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:49.647854   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:49.917785   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:49.920694   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:49.995732   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:50.144039   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:50.401881   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:50.405961   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:50.495194   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:50.728398   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:50.902871   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:50.903038   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:50.997004   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:51.145338   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:51.405606   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:51.407355   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:51.497968   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:51.643743   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:51.909260   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:51.909384   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:51.996284   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:52.143952   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:52.226788   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:48:52.400738   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:52.403414   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:52.495959   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:52.643689   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:52.905843   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:52.905889   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:52.995835   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:53.143834   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:53.313339   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.086514828s)
	W1001 17:48:53.313392   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:53.313414   14186 retry.go:31] will retry after 2.527725511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:53.402988   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:53.405315   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:53.499781   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:53.645715   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:53.901483   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:53.904846   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:54.047187   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:54.292305   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:54.401681   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:54.401985   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:54.495743   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:54.643802   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:55.081060   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:55.084069   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:55.084209   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:55.143965   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:55.405148   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:55.407949   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:55.498459   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:55.644920   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:55.841604   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:48:55.901166   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:55.902101   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:55.995565   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:56.143035   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:56.404569   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:56.406672   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:56.496627   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:56.645589   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:56.984124   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:56.984213   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:56.988736   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.147086373s)
	W1001 17:48:56.988777   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:56.988798   14186 retry.go:31] will retry after 6.233552604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:48:56.998784   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:57.143768   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:57.402240   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:57.403826   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:57.495653   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:57.645728   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:57.901054   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:57.901581   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:57.995742   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:58.145468   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:58.400377   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:58.401836   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:58.495789   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:58.642081   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:58.899799   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:58.901544   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:59.227118   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:59.250370   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:59.400529   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:59.401553   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:59.495797   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:48:59.642930   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:48:59.901984   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:48:59.902609   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:48:59.997870   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:00.142608   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:00.402788   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:00.403072   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:00.503309   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:00.642516   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:00.901364   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:00.901595   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:00.996436   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:01.142779   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:01.401830   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:01.403313   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:01.495750   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:01.642248   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:01.899708   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:01.902309   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:01.995539   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:02.142583   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:02.400073   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:02.401872   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:02.494963   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:02.642713   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:02.904239   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:02.904335   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:02.996214   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:03.155200   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:03.223242   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:49:03.402720   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:03.402817   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:03.497721   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:03.645355   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:03.904967   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:03.907155   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:03.995271   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:04.143405   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:04.359865   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.136579131s)
	W1001 17:49:04.359920   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:49:04.359946   14186 retry.go:31] will retry after 6.967648216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:49:04.404293   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:04.406956   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:04.496482   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:04.645138   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:04.902739   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:04.902836   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:04.997082   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:05.142285   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:05.403214   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:05.403353   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:05.496558   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:05.927184   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:05.930706   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:05.930742   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:06.022896   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:06.143523   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:06.402649   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:06.404609   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:06.496016   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:06.644482   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:06.900360   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:06.902218   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:06.995467   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:07.143021   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:07.400855   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:07.401415   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:07.495381   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:07.643553   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:07.901743   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:07.901864   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:07.994953   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:08.142721   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:08.400052   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:08.400967   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:08.495182   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:08.644107   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:08.900032   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:08.902721   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:08.999549   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:09.145341   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:09.401310   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:09.406318   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:09.496380   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:09.645441   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:09.900969   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:09.902562   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:09.995067   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:10.142550   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:10.401409   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:10.401890   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:10.495308   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:10.643581   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:10.901183   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:10.901244   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:10.996388   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:11.143803   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:11.327810   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:49:11.408440   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:11.410515   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:11.496174   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:11.642707   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:11.899995   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:11.900883   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:11.995214   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1001 17:49:12.031070   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:49:12.031105   14186 retry.go:31] will retry after 9.354306513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:49:12.142586   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:12.400024   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:12.401666   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:12.495056   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:12.643202   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:12.903359   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:12.904157   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:12.999796   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:13.142282   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:13.403278   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:13.403818   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:13.496488   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:13.645412   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:13.900747   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:13.906225   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:14.002228   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:14.142950   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:14.401989   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 17:49:14.402034   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:14.502848   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:14.642795   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:14.900166   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:14.901122   14186 kapi.go:107] duration metric: took 36.003140751s to wait for kubernetes.io/minikube-addons=registry ...
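
While the apply retries continue, kapi.go polls each addon's pods by label selector roughly every 250-500ms until they leave Pending; here the registry waiter completes after about 36s while the ingress-nginx, gcp-auth and csi-hostpath-driver waiters keep polling. A minimal client-go sketch of such a wait loop follows; the kubeconfig path, namespace and sleep interval are assumptions for illustration, and this is not minikube's actual kapi.go.

// waitpods.go - illustrative only: poll pods matching a label selector until
// they all report phase Running, similar to the "waiting for pod" lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "kubernetes.io/minikube-addons=registry" // label taken from the log above
	namespace := "kube-system"                           // assumed namespace
	for {
		pods, err := client.CoreV1().Pods(namespace).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if allRunning {
			fmt.Println("all pods running")
			return
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
}
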
	I1001 17:49:14.995967   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:15.142421   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:15.400022   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:15.494934   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:15.642763   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:15.900410   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:15.995239   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:16.143271   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:16.400423   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:16.496394   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:16.644378   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:16.901266   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:16.996495   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:17.145087   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:17.403704   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:17.495944   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:17.643956   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:17.900969   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:17.997724   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:18.141717   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:18.400285   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:18.500446   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:18.643657   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:18.899314   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:18.997552   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:19.142976   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:19.401384   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:19.495785   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:19.642125   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:19.900830   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:19.994984   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:20.145097   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:20.400999   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:20.496051   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:20.646760   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:20.900120   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:20.994588   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:21.141975   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:21.386203   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:49:21.478982   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:21.500518   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:21.647178   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:21.903002   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:21.997242   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:22.144575   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:22.401107   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:22.454877   14186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.06861435s)
	W1001 17:49:22.454920   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:49:22.454942   14186 retry.go:31] will retry after 11.536437613s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:49:22.497405   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:22.646372   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:22.902571   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:22.998699   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:23.143536   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:23.399708   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:23.843037   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:23.844873   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:24.006876   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:24.008999   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:24.144159   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:24.401875   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:24.495876   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:24.644754   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:24.904792   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:24.997558   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:25.145671   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:25.400451   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:25.495266   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:25.644133   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:25.902990   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:25.997021   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:26.145262   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:26.400467   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:26.499502   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:26.643914   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:26.900775   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:26.994557   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:27.141968   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:27.400599   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:27.495748   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:27.642402   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:27.900212   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:27.995770   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:28.141909   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:28.400951   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:28.498222   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:28.645261   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:28.900056   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:28.996046   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:29.145605   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:29.402772   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:29.499524   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:29.645002   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:29.903879   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:29.995788   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:30.141943   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:30.400704   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:30.497251   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:30.642864   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:30.901971   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:30.996374   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:31.149925   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:31.400356   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:31.495536   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:31.643884   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:31.900756   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:32.039245   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:32.143930   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:32.401028   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:32.495872   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:32.644310   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:32.900575   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:32.998512   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:33.144073   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:33.402234   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:33.498060   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:33.643483   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:33.904330   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:33.992562   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:49:34.003049   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:34.143294   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:34.399744   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:34.495134   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:34.642646   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 17:49:34.796637   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:49:34.796672   14186 retry.go:31] will retry after 30.64580058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
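[editor note] The repeated apply failure above is kubectl's client-side validation rejecting ig-crd.yaml: the error reports that a document in that file has no apiVersion or kind set, so every retry fails identically while the other gadget resources are applied unchanged. As a rough illustration only (the file path is copied from the log; the manifest contents themselves are not included in this report), the same validation behaviour can be exercised with a client-side dry run, and --validate=false is the workaround kubectl itself suggests rather than a fix:

	# hypothetical check of the manifest that is failing validation
	kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	# skips client-side schema validation instead of correcting the missing fields
	kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml
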
	I1001 17:49:34.900363   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:34.995562   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:35.142462   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:35.400341   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:35.495509   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:35.642408   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:35.899290   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:35.995060   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:36.145351   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:36.400719   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:36.497734   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:36.642477   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:36.901706   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:37.018073   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:37.143483   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:37.400014   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:37.495522   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:37.644377   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:37.899953   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:37.995063   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:38.143547   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:38.401225   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:38.497068   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:38.645280   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:38.900546   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:38.996456   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:39.143619   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:39.402160   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:39.495502   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:39.643974   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:39.900176   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:39.995298   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:40.143539   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:40.399870   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:40.497784   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:40.645291   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:40.904408   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:40.995258   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:41.142901   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:41.401186   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:41.494836   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:41.642445   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:41.899391   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:41.995610   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:42.143101   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:42.401472   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:42.495709   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:42.642790   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:42.902988   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:42.995421   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:43.144664   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:43.402377   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:43.498820   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:43.643378   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:43.902234   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:43.996775   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:44.143471   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:44.400186   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:44.499948   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:44.645006   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:44.903041   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:44.994975   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:45.142259   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:45.403094   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:45.497916   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:45.642523   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:45.901826   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:45.997812   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:46.145892   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:46.402404   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:46.496260   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:46.643270   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:46.902788   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:46.997910   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:47.445071   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:47.446177   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:47.497998   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:47.643835   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:47.900017   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:47.996401   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:48.144132   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:48.403144   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:48.510927   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:48.644355   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:48.903549   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:49.007379   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:49.143410   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:49.400471   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:49.496236   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:49.649472   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:49.901574   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:49.995474   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:50.147148   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:50.404999   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:50.497103   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:50.645266   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:50.901645   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:51.001531   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:51.147275   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:51.402804   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:51.496333   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:51.644328   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:51.903157   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:51.995298   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:52.323796   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:52.403206   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:52.496250   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:52.644181   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:52.901039   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:52.996272   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:53.143625   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:53.520722   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:53.524652   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:53.643512   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:53.900290   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:53.999196   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:54.143502   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:54.401856   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:54.497792   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:54.644145   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:54.903676   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:55.001594   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:55.141929   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:55.401711   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:55.497133   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:55.645287   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:55.901784   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:56.234607   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:56.239781   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:56.402959   14186 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 17:49:56.504156   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:56.645336   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:56.903709   14186 kapi.go:107] duration metric: took 1m18.0073884s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 17:49:56.997311   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:57.142610   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:57.496528   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:57.643729   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:57.996781   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:58.144593   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:58.500985   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:58.646233   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:58.995316   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:59.152305   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:59.497936   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:49:59.644177   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:49:59.998945   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:00.144674   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:50:00.495561   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:00.643149   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:50:00.997464   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:01.145045   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:50:01.495916   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:01.646249   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:50:01.995839   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:02.142902   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:50:02.504554   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:02.644415   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:50:02.997739   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:03.145517   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:50:03.497119   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:03.643107   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 17:50:03.995626   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:04.143990   14186 kapi.go:107] duration metric: took 1m24.005400696s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 17:50:04.496253   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:04.995803   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:05.443391   14186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 17:50:05.499637   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:05.997464   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1001 17:50:06.114350   14186 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 17:50:06.114412   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:50:06.114440   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:50:06.114729   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:50:06.114737   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:50:06.114749   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 17:50:06.114761   14186 main.go:141] libmachine: Making call to close driver server
	I1001 17:50:06.114770   14186 main.go:141] libmachine: (addons-289249) Calling .Close
	I1001 17:50:06.114978   14186 main.go:141] libmachine: (addons-289249) DBG | Closing plugin on server side
	I1001 17:50:06.115020   14186 main.go:141] libmachine: Successfully made call to close driver server
	I1001 17:50:06.115032   14186 main.go:141] libmachine: Making call to close connection to plugin binary
	W1001 17:50:06.115119   14186 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1001 17:50:06.495372   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:06.995075   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:07.495495   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:07.995615   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:08.495450   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:08.995598   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:09.502174   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:09.996521   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:10.495536   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:10.995470   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:11.495038   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:11.995553   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:12.495498   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:12.995354   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:13.496207   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:13.995797   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:14.496872   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:14.995797   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:15.496065   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:15.995781   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:16.495712   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:16.995218   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:17.496001   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:17.995963   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:18.496120   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:18.995928   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:19.496880   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:19.995550   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:20.496073   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:20.996678   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:21.495540   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:21.994825   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:22.495506   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:22.995316   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:23.497144   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:23.995547   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:24.495530   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:24.995285   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:25.501003   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:25.995314   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:26.496523   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:26.995469   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:27.495586   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:27.995165   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:28.496130   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:28.996319   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:29.497605   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:29.995053   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:30.496133   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:30.995677   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:31.495297   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:31.996216   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:32.496135   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:32.996425   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:33.494792   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:33.996423   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:34.495636   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:34.995561   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:35.496350   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:35.995776   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:36.496079   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:36.997069   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:37.496114   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:37.995717   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:38.495875   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:38.995159   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:39.498020   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:39.995786   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:40.496461   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:40.995876   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:41.496279   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:41.999334   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:42.496086   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:42.996760   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:43.495162   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:43.995186   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:44.496679   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:44.995578   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:45.495764   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:45.995266   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:46.495949   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:46.996268   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:47.495503   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:47.995135   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:48.496111   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:48.995875   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:49.495948   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:49.995329   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:50.495346   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:50.995981   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:51.495889   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:51.995570   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:52.495045   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:52.996465   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:53.495191   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:53.996033   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:54.496520   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:54.996246   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:55.496917   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:55.995682   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:56.494896   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:56.995987   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:57.495824   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:57.996259   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:58.496374   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:58.995546   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:59.495492   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:50:59.995192   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:51:00.496052   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:51:00.996210   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:51:01.495789   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:51:02.001350   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:51:02.506129   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:51:02.998162   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:51:03.496354   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:51:03.995830   14186 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 17:51:04.496921   14186 kapi.go:107] duration metric: took 2m22.005153494s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 17:51:04.498343   14186 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-289249 cluster.
	I1001 17:51:04.499446   14186 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 17:51:04.500546   14186 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
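[editor note] The three messages above describe the gcp-auth addon's behaviour once enabled: credentials are mounted into newly created pods by default, and a pod opts out by carrying the gcp-auth-skip-secret label. A minimal sketch of such an opt-out pod follows; the pod name is hypothetical and the label value "true" is an assumption, since the log only names the key:

	# sketch of a pod that declines the gcp-auth credential mount
	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: busybox
	EOF
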
	I1001 17:51:04.501737   14186 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, registry-creds, cloud-spanner, metrics-server, amd-gpu-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1001 17:51:04.502857   14186 addons.go:514] duration metric: took 2m33.99870563s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner registry-creds cloud-spanner metrics-server amd-gpu-device-plugin yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1001 17:51:04.502904   14186 start.go:246] waiting for cluster config update ...
	I1001 17:51:04.502922   14186 start.go:255] writing updated cluster config ...
	I1001 17:51:04.503171   14186 ssh_runner.go:195] Run: rm -f paused
	I1001 17:51:04.511131   14186 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 17:51:04.515311   14186 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x9ql9" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:04.521139   14186 pod_ready.go:94] pod "coredns-66bc5c9577-x9ql9" is "Ready"
	I1001 17:51:04.521158   14186 pod_ready.go:86] duration metric: took 5.827443ms for pod "coredns-66bc5c9577-x9ql9" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:04.523325   14186 pod_ready.go:83] waiting for pod "etcd-addons-289249" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:04.529171   14186 pod_ready.go:94] pod "etcd-addons-289249" is "Ready"
	I1001 17:51:04.529190   14186 pod_ready.go:86] duration metric: took 5.849987ms for pod "etcd-addons-289249" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:04.531420   14186 pod_ready.go:83] waiting for pod "kube-apiserver-addons-289249" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:04.536103   14186 pod_ready.go:94] pod "kube-apiserver-addons-289249" is "Ready"
	I1001 17:51:04.536131   14186 pod_ready.go:86] duration metric: took 4.684142ms for pod "kube-apiserver-addons-289249" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:04.538448   14186 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-289249" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:04.916838   14186 pod_ready.go:94] pod "kube-controller-manager-addons-289249" is "Ready"
	I1001 17:51:04.916878   14186 pod_ready.go:86] duration metric: took 378.405573ms for pod "kube-controller-manager-addons-289249" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:05.117508   14186 pod_ready.go:83] waiting for pod "kube-proxy-qqv7b" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:05.515500   14186 pod_ready.go:94] pod "kube-proxy-qqv7b" is "Ready"
	I1001 17:51:05.515534   14186 pod_ready.go:86] duration metric: took 397.995996ms for pod "kube-proxy-qqv7b" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:05.716209   14186 pod_ready.go:83] waiting for pod "kube-scheduler-addons-289249" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:06.115792   14186 pod_ready.go:94] pod "kube-scheduler-addons-289249" is "Ready"
	I1001 17:51:06.115817   14186 pod_ready.go:86] duration metric: took 399.58495ms for pod "kube-scheduler-addons-289249" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 17:51:06.115829   14186 pod_ready.go:40] duration metric: took 1.60467222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 17:51:06.158086   14186 start.go:620] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1001 17:51:06.160171   14186 out.go:179] * Done! kubectl is now configured to use "addons-289249" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.666607567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb7dea4c-00a5-41fe-ab70-e9e525de19eb name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.666960494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e272d8ffdc6becda97fbc032cb9864619b7663e89dd6684f36360c1bac3cbe4f,PodSandboxId:0586464c3a814ebbd155dc0656087d01253c27d13230f94bc69134170cbe423d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759341098240221998,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ffba61f2-ccbf-4f04-a767-abbb659d470d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678d61c6c4b5d5c15551cf75ae51654afab431260ccf943e6d879079153326b,PodSandboxId:23e1c2f9994f1f9363b3b0e4b36cbaf12d7678d44136054e2e0c2b16ef100417,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759341070243719608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a65eee9-eb8a-4623-a414-70ae928f0499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e406cddda396a8aa8d95fbaa7d8e70073b1a4c7dd4b50820bf5d6186a3bc07a4,PodSandboxId:13315679e5363806482fbe3dd5304476d1039ababb4c2be25912ead2f8ce9423,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759340996389258199,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-pcnz5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d99d78fb-79e1-457c-9929-b6c2bb798c8b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8b70c01398cab8ca3f4c930f68d02c5e3b983b507d005102bdba93bb45cc4c67,PodSandboxId:254a62d3cb1127aa1ca58f86248a9a2c5009870ab598dacc0246014a1c300797,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759340991470352311,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-sjnj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5c02f8af-d69b-4ca2-9b41-068f5c5c50bb,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc81e8303245c3ecbec639ac872c12e301cc9cf67676ba26bd43b26a04b1801,PodSandboxId:64cd641321bacd9f891acfc74e24a1b5cccf4decbdda4e00d9d36de02d1e5004,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd
086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759340982310914053,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-6sfsx,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 3c428d56-4ab5-4dc2-af3c-b06291b79dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75de1c025a6039069a51edfa634e0181752ef05a61a03414416a701a8867b028,PodSandboxId:03bbdc8cccd96ab94ff10b3caca582051b05068d672472e73a1c437aec548315,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759340977026021770,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dhnvg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed141d3b-7f93-4bcd-94e2-981f5db851a7,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a0b008f037f575dec0a5d94217460c3f582faedbec32a3bc53f6cb7634269f1,PodSandboxId:7fd61deb51eab8bfe2c1bb12af1d991143fbb08b91e911b52f3dd781ff5d9126,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e
4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759340976897837900,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-cf9hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1de1e9a5-5bdb-47c9-8c2c-eb37bda2abe3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbeb6fcafa94ccca191de67d94881e21c51f6bca7d6d673a596b716d18eb6f1e,PodSandboxId:c88f1fce38db125f395fa5472c7e46040d256e531b2a7c5b1a4f35bdeb5d0e47,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759340964167040033,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b3ad449-2a8f-46c0-b054-650022c2eaa2,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9253f21eaac10a42d2ae5695fce0f9da9075d47128b790af383ca77c469f31,PodSandboxId:2e936982d5f67f3ee07441adcc9161009cb4
ee776a1b4eb3bd3bfe4760c3be24,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759340921689628110,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lj7zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 458fbc9f-e3be-4fee-ab72-7f4935340a55,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1045d22f139a84d56d4b17f9822acc5d2833b0cfbcc53d73780cdcef80cb60db,PodSandbo
xId:541e8a716bb1927c2bfecdb8e89f5fb721e73c131d1da45bd3c16c5682367a53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759340919671528908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4026b516-5215-4991-aaef-5899ce674e96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319a78086548188e47af21981b4e0eff8ce4dea212b90ab5ac3818667655a95a,PodSandboxId:eef00fe4
0c7469a7d9575086f5a57100f763a4bce23b8b561782410ce92d9839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759340911696533160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9ql9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98c76e8-0151-4a64-960b-088b4180e8be,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfb27544e0326c6987ca380f0f2d4abc18b8e05222937055d3237fb416660e2c,PodSandboxId:1901bbd7cf6267c2a3a87ea8c419aef73e9690fc9f2a60d6c8d9de7775b269ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759340910979823625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qqv7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c986ca-87d5-4a30-804f-3c19cd5fcfa5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b821ebc9d0a232501b4bc480353cbdb43b1e9936988e6a2dad2a8a9881fd0c3,PodSandboxId:58c94176ebbc6971ab9f8b8a80e498a7e3230722fe3a71ee467970cdee7611cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759340899321702457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a6b9df952938ca93ea37ea39dc5ec4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05fc3e000e20a757e9931297f0ac9363db6964d84fedd4d736049bc85ba342ce,PodSandboxId:b86b0a89761e36480c02d5c7016c8763d5b15094bd6f0f4e6b98961d16b6d0b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759340899309733422,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567495a756e5424bd9621e772873bf1e,},Annotations:map
[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f3f747b44efd8913bf7bbe7eb107c574cf1018d8d97549f290ba9719c81a8d,PodSandboxId:5eb83a0676adfd715f401b5484a4ccf18c15c139e0a39b9fa94aac9376d6cba7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759340899343437404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-289249,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b5ed8f33bb4fc4271766d60e332081,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89698b6c060f0d98d35471d43c283ea2c378e4737237cd40684d28125742957a,PodSandboxId:620fceedba45ed6a8a93640cf384f20dbd29e2a67ed447d804547c9da13a6996,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759340899269106854,Labels:map[string]
string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7390626efab189f7eb2770a1531d0557,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb7dea4c-00a5-41fe-ab70-e9e525de19eb name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.702091152Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.702573915Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.710116764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6effd73b-cc49-478b-87d0-35f433f19523 name=/runtime.v1.RuntimeService/Version
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.710418163Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6effd73b-cc49-478b-87d0-35f433f19523 name=/runtime.v1.RuntimeService/Version
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.711905212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d88eb41f-436a-4a5e-9bd6-e71ac0238520 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.713843667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759341240713816082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598015,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d88eb41f-436a-4a5e-9bd6-e71ac0238520 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.714748899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=752df327-eddd-4569-ab06-aa1648cd1739 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.714817048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=752df327-eddd-4569-ab06-aa1648cd1739 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.715191450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e272d8ffdc6becda97fbc032cb9864619b7663e89dd6684f36360c1bac3cbe4f,PodSandboxId:0586464c3a814ebbd155dc0656087d01253c27d13230f94bc69134170cbe423d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759341098240221998,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ffba61f2-ccbf-4f04-a767-abbb659d470d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678d61c6c4b5d5c15551cf75ae51654afab431260ccf943e6d879079153326b,PodSandboxId:23e1c2f9994f1f9363b3b0e4b36cbaf12d7678d44136054e2e0c2b16ef100417,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759341070243719608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a65eee9-eb8a-4623-a414-70ae928f0499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e406cddda396a8aa8d95fbaa7d8e70073b1a4c7dd4b50820bf5d6186a3bc07a4,PodSandboxId:13315679e5363806482fbe3dd5304476d1039ababb4c2be25912ead2f8ce9423,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759340996389258199,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-pcnz5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d99d78fb-79e1-457c-9929-b6c2bb798c8b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8b70c01398cab8ca3f4c930f68d02c5e3b983b507d005102bdba93bb45cc4c67,PodSandboxId:254a62d3cb1127aa1ca58f86248a9a2c5009870ab598dacc0246014a1c300797,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759340991470352311,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-sjnj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5c02f8af-d69b-4ca2-9b41-068f5c5c50bb,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc81e8303245c3ecbec639ac872c12e301cc9cf67676ba26bd43b26a04b1801,PodSandboxId:64cd641321bacd9f891acfc74e24a1b5cccf4decbdda4e00d9d36de02d1e5004,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd
086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759340982310914053,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-6sfsx,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 3c428d56-4ab5-4dc2-af3c-b06291b79dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75de1c025a6039069a51edfa634e0181752ef05a61a03414416a701a8867b028,PodSandboxId:03bbdc8cccd96ab94ff10b3caca582051b05068d672472e73a1c437aec548315,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759340977026021770,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dhnvg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed141d3b-7f93-4bcd-94e2-981f5db851a7,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a0b008f037f575dec0a5d94217460c3f582faedbec32a3bc53f6cb7634269f1,PodSandboxId:7fd61deb51eab8bfe2c1bb12af1d991143fbb08b91e911b52f3dd781ff5d9126,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e
4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759340976897837900,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-cf9hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1de1e9a5-5bdb-47c9-8c2c-eb37bda2abe3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbeb6fcafa94ccca191de67d94881e21c51f6bca7d6d673a596b716d18eb6f1e,PodSandboxId:c88f1fce38db125f395fa5472c7e46040d256e531b2a7c5b1a4f35bdeb5d0e47,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759340964167040033,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b3ad449-2a8f-46c0-b054-650022c2eaa2,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9253f21eaac10a42d2ae5695fce0f9da9075d47128b790af383ca77c469f31,PodSandboxId:2e936982d5f67f3ee07441adcc9161009cb4
ee776a1b4eb3bd3bfe4760c3be24,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759340921689628110,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lj7zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 458fbc9f-e3be-4fee-ab72-7f4935340a55,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1045d22f139a84d56d4b17f9822acc5d2833b0cfbcc53d73780cdcef80cb60db,PodSandbo
xId:541e8a716bb1927c2bfecdb8e89f5fb721e73c131d1da45bd3c16c5682367a53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759340919671528908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4026b516-5215-4991-aaef-5899ce674e96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319a78086548188e47af21981b4e0eff8ce4dea212b90ab5ac3818667655a95a,PodSandboxId:eef00fe4
0c7469a7d9575086f5a57100f763a4bce23b8b561782410ce92d9839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759340911696533160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9ql9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98c76e8-0151-4a64-960b-088b4180e8be,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfb27544e0326c6987ca380f0f2d4abc18b8e05222937055d3237fb416660e2c,PodSandboxId:1901bbd7cf6267c2a3a87ea8c419aef73e9690fc9f2a60d6c8d9de7775b269ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759340910979823625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qqv7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c986ca-87d5-4a30-804f-3c19cd5fcfa5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b821ebc9d0a232501b4bc480353cbdb43b1e9936988e6a2dad2a8a9881fd0c3,PodSandboxId:58c94176ebbc6971ab9f8b8a80e498a7e3230722fe3a71ee467970cdee7611cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759340899321702457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a6b9df952938ca93ea37ea39dc5ec4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05fc3e000e20a757e9931297f0ac9363db6964d84fedd4d736049bc85ba342ce,PodSandboxId:b86b0a89761e36480c02d5c7016c8763d5b15094bd6f0f4e6b98961d16b6d0b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759340899309733422,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567495a756e5424bd9621e772873bf1e,},Annotations:map
[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f3f747b44efd8913bf7bbe7eb107c574cf1018d8d97549f290ba9719c81a8d,PodSandboxId:5eb83a0676adfd715f401b5484a4ccf18c15c139e0a39b9fa94aac9376d6cba7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759340899343437404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-289249,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b5ed8f33bb4fc4271766d60e332081,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89698b6c060f0d98d35471d43c283ea2c378e4737237cd40684d28125742957a,PodSandboxId:620fceedba45ed6a8a93640cf384f20dbd29e2a67ed447d804547c9da13a6996,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759340899269106854,Labels:map[string]
string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7390626efab189f7eb2770a1531d0557,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=752df327-eddd-4569-ab06-aa1648cd1739 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.751634705Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cdbfc430-e71c-42fe-9c8f-51b69f35c113 name=/runtime.v1.RuntimeService/Version
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.751728855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cdbfc430-e71c-42fe-9c8f-51b69f35c113 name=/runtime.v1.RuntimeService/Version
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.753017701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8695a794-ea4d-418d-9ea2-7cef7eb66f6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.754747582Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759341240754714655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598015,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8695a794-ea4d-418d-9ea2-7cef7eb66f6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.755436990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9a4a3cb-a2e7-4551-8001-6339ab7ab340 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.755522040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9a4a3cb-a2e7-4551-8001-6339ab7ab340 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.755955899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e272d8ffdc6becda97fbc032cb9864619b7663e89dd6684f36360c1bac3cbe4f,PodSandboxId:0586464c3a814ebbd155dc0656087d01253c27d13230f94bc69134170cbe423d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759341098240221998,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ffba61f2-ccbf-4f04-a767-abbb659d470d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678d61c6c4b5d5c15551cf75ae51654afab431260ccf943e6d879079153326b,PodSandboxId:23e1c2f9994f1f9363b3b0e4b36cbaf12d7678d44136054e2e0c2b16ef100417,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759341070243719608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a65eee9-eb8a-4623-a414-70ae928f0499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e406cddda396a8aa8d95fbaa7d8e70073b1a4c7dd4b50820bf5d6186a3bc07a4,PodSandboxId:13315679e5363806482fbe3dd5304476d1039ababb4c2be25912ead2f8ce9423,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759340996389258199,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-pcnz5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d99d78fb-79e1-457c-9929-b6c2bb798c8b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8b70c01398cab8ca3f4c930f68d02c5e3b983b507d005102bdba93bb45cc4c67,PodSandboxId:254a62d3cb1127aa1ca58f86248a9a2c5009870ab598dacc0246014a1c300797,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759340991470352311,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-sjnj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5c02f8af-d69b-4ca2-9b41-068f5c5c50bb,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc81e8303245c3ecbec639ac872c12e301cc9cf67676ba26bd43b26a04b1801,PodSandboxId:64cd641321bacd9f891acfc74e24a1b5cccf4decbdda4e00d9d36de02d1e5004,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd
086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759340982310914053,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-6sfsx,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 3c428d56-4ab5-4dc2-af3c-b06291b79dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75de1c025a6039069a51edfa634e0181752ef05a61a03414416a701a8867b028,PodSandboxId:03bbdc8cccd96ab94ff10b3caca582051b05068d672472e73a1c437aec548315,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759340977026021770,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dhnvg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed141d3b-7f93-4bcd-94e2-981f5db851a7,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a0b008f037f575dec0a5d94217460c3f582faedbec32a3bc53f6cb7634269f1,PodSandboxId:7fd61deb51eab8bfe2c1bb12af1d991143fbb08b91e911b52f3dd781ff5d9126,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e
4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759340976897837900,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-cf9hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1de1e9a5-5bdb-47c9-8c2c-eb37bda2abe3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbeb6fcafa94ccca191de67d94881e21c51f6bca7d6d673a596b716d18eb6f1e,PodSandboxId:c88f1fce38db125f395fa5472c7e46040d256e531b2a7c5b1a4f35bdeb5d0e47,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759340964167040033,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b3ad449-2a8f-46c0-b054-650022c2eaa2,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9253f21eaac10a42d2ae5695fce0f9da9075d47128b790af383ca77c469f31,PodSandboxId:2e936982d5f67f3ee07441adcc9161009cb4
ee776a1b4eb3bd3bfe4760c3be24,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759340921689628110,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lj7zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 458fbc9f-e3be-4fee-ab72-7f4935340a55,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1045d22f139a84d56d4b17f9822acc5d2833b0cfbcc53d73780cdcef80cb60db,PodSandbo
xId:541e8a716bb1927c2bfecdb8e89f5fb721e73c131d1da45bd3c16c5682367a53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759340919671528908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4026b516-5215-4991-aaef-5899ce674e96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319a78086548188e47af21981b4e0eff8ce4dea212b90ab5ac3818667655a95a,PodSandboxId:eef00fe4
0c7469a7d9575086f5a57100f763a4bce23b8b561782410ce92d9839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759340911696533160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9ql9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98c76e8-0151-4a64-960b-088b4180e8be,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfb27544e0326c6987ca380f0f2d4abc18b8e05222937055d3237fb416660e2c,PodSandboxId:1901bbd7cf6267c2a3a87ea8c419aef73e9690fc9f2a60d6c8d9de7775b269ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759340910979823625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qqv7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c986ca-87d5-4a30-804f-3c19cd5fcfa5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b821ebc9d0a232501b4bc480353cbdb43b1e9936988e6a2dad2a8a9881fd0c3,PodSandboxId:58c94176ebbc6971ab9f8b8a80e498a7e3230722fe3a71ee467970cdee7611cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759340899321702457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a6b9df952938ca93ea37ea39dc5ec4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05fc3e000e20a757e9931297f0ac9363db6964d84fedd4d736049bc85ba342ce,PodSandboxId:b86b0a89761e36480c02d5c7016c8763d5b15094bd6f0f4e6b98961d16b6d0b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759340899309733422,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567495a756e5424bd9621e772873bf1e,},Annotations:map
[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f3f747b44efd8913bf7bbe7eb107c574cf1018d8d97549f290ba9719c81a8d,PodSandboxId:5eb83a0676adfd715f401b5484a4ccf18c15c139e0a39b9fa94aac9376d6cba7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759340899343437404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-289249,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b5ed8f33bb4fc4271766d60e332081,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89698b6c060f0d98d35471d43c283ea2c378e4737237cd40684d28125742957a,PodSandboxId:620fceedba45ed6a8a93640cf384f20dbd29e2a67ed447d804547c9da13a6996,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759340899269106854,Labels:map[string]
string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7390626efab189f7eb2770a1531d0557,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9a4a3cb-a2e7-4551-8001-6339ab7ab340 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.796599860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f2cafd6-9023-479e-aeb6-691859cd7b38 name=/runtime.v1.RuntimeService/Version
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.796676270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f2cafd6-9023-479e-aeb6-691859cd7b38 name=/runtime.v1.RuntimeService/Version
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.798457161Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=243f54bd-383a-4eb4-ae4e-09c780466d78 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.799805452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759341240799771842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598015,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=243f54bd-383a-4eb4-ae4e-09c780466d78 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.800903007Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b8a4d22-38e7-4c5a-82ce-57972b650b08 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.801041064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b8a4d22-38e7-4c5a-82ce-57972b650b08 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 17:54:00 addons-289249 crio[820]: time="2025-10-01 17:54:00.801653566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e272d8ffdc6becda97fbc032cb9864619b7663e89dd6684f36360c1bac3cbe4f,PodSandboxId:0586464c3a814ebbd155dc0656087d01253c27d13230f94bc69134170cbe423d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759341098240221998,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ffba61f2-ccbf-4f04-a767-abbb659d470d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678d61c6c4b5d5c15551cf75ae51654afab431260ccf943e6d879079153326b,PodSandboxId:23e1c2f9994f1f9363b3b0e4b36cbaf12d7678d44136054e2e0c2b16ef100417,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759341070243719608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a65eee9-eb8a-4623-a414-70ae928f0499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e406cddda396a8aa8d95fbaa7d8e70073b1a4c7dd4b50820bf5d6186a3bc07a4,PodSandboxId:13315679e5363806482fbe3dd5304476d1039ababb4c2be25912ead2f8ce9423,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759340996389258199,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-pcnz5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d99d78fb-79e1-457c-9929-b6c2bb798c8b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8b70c01398cab8ca3f4c930f68d02c5e3b983b507d005102bdba93bb45cc4c67,PodSandboxId:254a62d3cb1127aa1ca58f86248a9a2c5009870ab598dacc0246014a1c300797,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759340991470352311,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-sjnj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5c02f8af-d69b-4ca2-9b41-068f5c5c50bb,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc81e8303245c3ecbec639ac872c12e301cc9cf67676ba26bd43b26a04b1801,PodSandboxId:64cd641321bacd9f891acfc74e24a1b5cccf4decbdda4e00d9d36de02d1e5004,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd
086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759340982310914053,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-6sfsx,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 3c428d56-4ab5-4dc2-af3c-b06291b79dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75de1c025a6039069a51edfa634e0181752ef05a61a03414416a701a8867b028,PodSandboxId:03bbdc8cccd96ab94ff10b3caca582051b05068d672472e73a1c437aec548315,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759340977026021770,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dhnvg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed141d3b-7f93-4bcd-94e2-981f5db851a7,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a0b008f037f575dec0a5d94217460c3f582faedbec32a3bc53f6cb7634269f1,PodSandboxId:7fd61deb51eab8bfe2c1bb12af1d991143fbb08b91e911b52f3dd781ff5d9126,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e
4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759340976897837900,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-cf9hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1de1e9a5-5bdb-47c9-8c2c-eb37bda2abe3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbeb6fcafa94ccca191de67d94881e21c51f6bca7d6d673a596b716d18eb6f1e,PodSandboxId:c88f1fce38db125f395fa5472c7e46040d256e531b2a7c5b1a4f35bdeb5d0e47,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759340964167040033,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b3ad449-2a8f-46c0-b054-650022c2eaa2,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9253f21eaac10a42d2ae5695fce0f9da9075d47128b790af383ca77c469f31,PodSandboxId:2e936982d5f67f3ee07441adcc9161009cb4
ee776a1b4eb3bd3bfe4760c3be24,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759340921689628110,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lj7zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 458fbc9f-e3be-4fee-ab72-7f4935340a55,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1045d22f139a84d56d4b17f9822acc5d2833b0cfbcc53d73780cdcef80cb60db,PodSandbo
xId:541e8a716bb1927c2bfecdb8e89f5fb721e73c131d1da45bd3c16c5682367a53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759340919671528908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4026b516-5215-4991-aaef-5899ce674e96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319a78086548188e47af21981b4e0eff8ce4dea212b90ab5ac3818667655a95a,PodSandboxId:eef00fe4
0c7469a7d9575086f5a57100f763a4bce23b8b561782410ce92d9839,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759340911696533160,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9ql9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98c76e8-0151-4a64-960b-088b4180e8be,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfb27544e0326c6987ca380f0f2d4abc18b8e05222937055d3237fb416660e2c,PodSandboxId:1901bbd7cf6267c2a3a87ea8c419aef73e9690fc9f2a60d6c8d9de7775b269ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759340910979823625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qqv7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c986ca-87d5-4a30-804f-3c19cd5fcfa5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b821ebc9d0a232501b4bc480353cbdb43b1e9936988e6a2dad2a8a9881fd0c3,PodSandboxId:58c94176ebbc6971ab9f8b8a80e498a7e3230722fe3a71ee467970cdee7611cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759340899321702457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a6b9df952938ca93ea37ea39dc5ec4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05fc3e000e20a757e9931297f0ac9363db6964d84fedd4d736049bc85ba342ce,PodSandboxId:b86b0a89761e36480c02d5c7016c8763d5b15094bd6f0f4e6b98961d16b6d0b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759340899309733422,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567495a756e5424bd9621e772873bf1e,},Annotations:map
[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f3f747b44efd8913bf7bbe7eb107c574cf1018d8d97549f290ba9719c81a8d,PodSandboxId:5eb83a0676adfd715f401b5484a4ccf18c15c139e0a39b9fa94aac9376d6cba7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759340899343437404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-289249,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b5ed8f33bb4fc4271766d60e332081,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89698b6c060f0d98d35471d43c283ea2c378e4737237cd40684d28125742957a,PodSandboxId:620fceedba45ed6a8a93640cf384f20dbd29e2a67ed447d804547c9da13a6996,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759340899269106854,Labels:map[string]
string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-289249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7390626efab189f7eb2770a1531d0557,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b8a4d22-38e7-4c5a-82ce-57972b650b08 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e272d8ffdc6be       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   0586464c3a814       nginx
	1678d61c6c4b5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   23e1c2f9994f1       busybox
	e406cddda396a       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             4 minutes ago       Running             controller                0                   13315679e5363       ingress-nginx-controller-9cc49f96f-pcnz5
	8b70c01398cab       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             4 minutes ago       Exited              patch                     2                   254a62d3cb112       ingress-nginx-admission-patch-sjnj8
	dcc81e8303245       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago       Running             gadget                    0                   64cd641321bac       gadget-6sfsx
	75de1c025a603       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   03bbdc8cccd96       ingress-nginx-admission-create-dhnvg
	7a0b008f037f5       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   7fd61deb51eab       local-path-provisioner-648f6765c9-cf9hr
	fbeb6fcafa94c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   c88f1fce38db1       kube-ingress-dns-minikube
	cd9253f21eaac       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   2e936982d5f67       amd-gpu-device-plugin-lj7zx
	1045d22f139a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   541e8a716bb19       storage-provisioner
	319a780865481       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   eef00fe40c746       coredns-66bc5c9577-x9ql9
	bfb27544e0326       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   1901bbd7cf626       kube-proxy-qqv7b
	b2f3f747b44ef       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   5eb83a0676adf       kube-scheduler-addons-289249
	6b821ebc9d0a2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   58c94176ebbc6       etcd-addons-289249
	05fc3e000e20a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   b86b0a89761e3       kube-apiserver-addons-289249
	89698b6c060f0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   620fceedba45e       kube-controller-manager-addons-289249
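The table above is the CRI-level view of the node at post-mortem time. As a point of reference only (this is not the harness's own helper), a short Go sketch like the following could capture an equivalent listing by shelling out to crictl; it assumes crictl is installed on the node and talks to CRI-O over its default socket.

	// collect_containers.go: illustrative sketch only, not part of this report's tooling.
	// Assumes crictl is on PATH and configured for the node's CRI-O socket.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// "crictl ps -a" lists running and exited containers with image, state,
		// name, attempt and pod, much like the container status table above.
		out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			log.Fatalf("crictl ps -a: %v\n%s", err, out)
		}
		fmt.Print(string(out))
	}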
	
	
	==> coredns [319a78086548188e47af21981b4e0eff8ce4dea212b90ab5ac3818667655a95a] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37573 - 14706 "HINFO IN 1217678789446226510.7446342665983736423. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028360021s
	[INFO] 10.244.0.23:33635 - 16329 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000528431s
	[INFO] 10.244.0.23:54332 - 40688 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155128s
	[INFO] 10.244.0.23:48584 - 10118 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118628s
	[INFO] 10.244.0.23:48773 - 7007 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000388845s
	[INFO] 10.244.0.23:42598 - 8384 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000107362s
	[INFO] 10.244.0.23:33366 - 9906 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000074011s
	[INFO] 10.244.0.23:59681 - 38435 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.001009408s
	[INFO] 10.244.0.23:34778 - 37157 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001536681s
	[INFO] 10.244.0.27:57084 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000606969s
	[INFO] 10.244.0.27:48132 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154872s
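The query log above shows the pod resolver's search-path expansion: a non-FQDN such as storage.googleapis.com is first tried with the cluster suffixes (the NXDOMAIN entries) before the bare name resolves (NOERROR). A minimal Go sketch of one such lookup from inside a pod is given below; the resolver address 10.96.0.10:53 is the conventional kube-dns ClusterIP and is an assumption, not a value taken from this log.

	// lookup_sketch.go: illustrative only. Sends one query to an assumed
	// cluster DNS address instead of using /etc/resolv.conf.
	package main

	import (
		"context"
		"fmt"
		"log"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// Assumed kube-dns ClusterIP; adjust to the cluster's actual DNS service.
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}

		// Querying the bare name goes straight to NOERROR; the NXDOMAIN lines in
		// the CoreDNS log come from the pod's search-domain expansion of this name.
		addrs, err := r.LookupHost(context.Background(), "storage.googleapis.com")
		if err != nil {
			log.Fatalf("lookup failed: %v", err)
		}
		fmt.Println(addrs)
	}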
	
	
	==> describe nodes <==
	Name:               addons-289249
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-289249
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de12e0f54d226aca16c1f78311795f5ec99c1492
	                    minikube.k8s.io/name=addons-289249
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_01T17_48_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-289249
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Oct 2025 17:48:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-289249
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Oct 2025 17:53:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Oct 2025 17:52:31 +0000   Wed, 01 Oct 2025 17:48:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Oct 2025 17:52:31 +0000   Wed, 01 Oct 2025 17:48:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Oct 2025 17:52:31 +0000   Wed, 01 Oct 2025 17:48:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Oct 2025 17:52:31 +0000   Wed, 01 Oct 2025 17:48:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    addons-289249
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf149cde4f8a4282b129ffa0c4de9a2d
	  System UUID:                bf149cde-4f8a-4282-b129-ffa0c4de9a2d
	  Boot ID:                    ed13bc54-8258-411e-9cd9-d74e831d10dc
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     hello-world-app-5d498dc89-bgfkz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-6sfsx                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-pcnz5    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m23s
	  kube-system                 amd-gpu-device-plugin-lj7zx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 coredns-66bc5c9577-x9ql9                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m31s
	  kube-system                 etcd-addons-289249                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m38s
	  kube-system                 kube-apiserver-addons-289249                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-addons-289249       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-qqv7b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-scheduler-addons-289249                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  local-path-storage          local-path-provisioner-648f6765c9-cf9hr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m43s)  kubelet          Node addons-289249 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m43s)  kubelet          Node addons-289249 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s (x7 over 5m43s)  kubelet          Node addons-289249 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m36s                  kubelet          Node addons-289249 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s                  kubelet          Node addons-289249 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s                  kubelet          Node addons-289249 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m35s                  kubelet          Node addons-289249 status is now: NodeReady
	  Normal  RegisteredNode           5m32s                  node-controller  Node addons-289249 event: Registered Node addons-289249 in Controller
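The node snapshot above (labels, conditions, allocatable resources, per-pod requests and events) is the output of kubectl describe node. A minimal sketch in the same exec-based style as the report's other commands is shown below; the context and node name addons-289249 come from this report, while kubectl being on PATH is an assumption.

	// describe_node.go: illustrative sketch only, not the harness's post-mortem helper.
	// Assumes kubectl is on PATH and the "addons-289249" context exists locally.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "addons-289249",
			"describe", "node", "addons-289249")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl describe node: %v\n%s", err, out)
		}
		fmt.Print(string(out))
	}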
	
	
	==> dmesg <==
	[  +6.296197] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.682360] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.813567] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.980227] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.114750] kauditd_printk_skb: 72 callbacks suppressed
	[  +1.444173] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.010243] kauditd_printk_skb: 56 callbacks suppressed
	[  +3.604658] kauditd_printk_skb: 91 callbacks suppressed
	[Oct 1 17:50] kauditd_printk_skb: 67 callbacks suppressed
	[  +7.669571] kauditd_printk_skb: 35 callbacks suppressed
	[  +0.000727] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 1 17:51] kauditd_printk_skb: 41 callbacks suppressed
	[  +3.162150] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.907339] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.931937] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.636218] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000035] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.000076] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.024918] kauditd_printk_skb: 92 callbacks suppressed
	[  +0.943415] kauditd_printk_skb: 88 callbacks suppressed
	[  +0.817414] kauditd_printk_skb: 67 callbacks suppressed
	[Oct 1 17:52] kauditd_printk_skb: 54 callbacks suppressed
	[  +0.000062] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.651830] kauditd_printk_skb: 41 callbacks suppressed
	[Oct 1 17:53] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [6b821ebc9d0a232501b4bc480353cbdb43b1e9936988e6a2dad2a8a9881fd0c3] <==
	{"level":"info","ts":"2025-10-01T17:49:53.512795Z","caller":"traceutil/trace.go:172","msg":"trace[1483685860] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"166.820462ms","start":"2025-10-01T17:49:53.345963Z","end":"2025-10-01T17:49:53.512783Z","steps":["trace[1483685860] 'process raft request'  (duration: 166.69942ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T17:49:53.513056Z","caller":"traceutil/trace.go:172","msg":"trace[1851222687] linearizableReadLoop","detail":"{readStateIndex:1170; appliedIndex:1171; }","duration":"118.91231ms","start":"2025-10-01T17:49:53.394134Z","end":"2025-10-01T17:49:53.513046Z","steps":["trace[1851222687] 'read index received'  (duration: 118.909339ms)","trace[1851222687] 'applied index is now lower than readState.Index'  (duration: 2.585µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-01T17:49:53.513284Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.128216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-01T17:49:53.513315Z","caller":"traceutil/trace.go:172","msg":"trace[1214900339] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"119.175211ms","start":"2025-10-01T17:49:53.394131Z","end":"2025-10-01T17:49:53.513306Z","steps":["trace[1214900339] 'agreement among raft nodes before linearized reading'  (duration: 118.988338ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T17:49:53.518785Z","caller":"traceutil/trace.go:172","msg":"trace[1574844348] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"143.220102ms","start":"2025-10-01T17:49:53.375551Z","end":"2025-10-01T17:49:53.518771Z","steps":["trace[1574844348] 'process raft request'  (duration: 142.551567ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T17:49:56.228574Z","caller":"traceutil/trace.go:172","msg":"trace[127495463] linearizableReadLoop","detail":"{readStateIndex:1186; appliedIndex:1186; }","duration":"239.209525ms","start":"2025-10-01T17:49:55.989344Z","end":"2025-10-01T17:49:56.228553Z","steps":["trace[127495463] 'read index received'  (duration: 239.20357ms)","trace[127495463] 'applied index is now lower than readState.Index'  (duration: 5.319µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-01T17:49:56.228708Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"239.349843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-01T17:49:56.228727Z","caller":"traceutil/trace.go:172","msg":"trace[1522799797] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"239.383044ms","start":"2025-10-01T17:49:55.989339Z","end":"2025-10-01T17:49:56.228722Z","steps":["trace[1522799797] 'agreement among raft nodes before linearized reading'  (duration: 239.32055ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T17:49:56.230728Z","caller":"traceutil/trace.go:172","msg":"trace[1522083628] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"293.129321ms","start":"2025-10-01T17:49:55.937585Z","end":"2025-10-01T17:49:56.230714Z","steps":["trace[1522083628] 'process raft request'  (duration: 291.357671ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T17:50:32.673532Z","caller":"traceutil/trace.go:172","msg":"trace[403934928] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"100.553342ms","start":"2025-10-01T17:50:32.572944Z","end":"2025-10-01T17:50:32.673497Z","steps":["trace[403934928] 'process raft request'  (duration: 100.46303ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T17:51:31.245615Z","caller":"traceutil/trace.go:172","msg":"trace[1092395243] transaction","detail":"{read_only:false; response_revision:1454; number_of_response:1; }","duration":"183.257731ms","start":"2025-10-01T17:51:31.062263Z","end":"2025-10-01T17:51:31.245521Z","steps":["trace[1092395243] 'process raft request'  (duration: 182.923586ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T17:51:33.302108Z","caller":"traceutil/trace.go:172","msg":"trace[685677902] linearizableReadLoop","detail":"{readStateIndex:1539; appliedIndex:1539; }","duration":"139.7052ms","start":"2025-10-01T17:51:33.162378Z","end":"2025-10-01T17:51:33.302083Z","steps":["trace[685677902] 'read index received'  (duration: 139.700159ms)","trace[685677902] 'applied index is now lower than readState.Index'  (duration: 4.335µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-01T17:51:33.302285Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.8653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-01T17:51:33.302325Z","caller":"traceutil/trace.go:172","msg":"trace[1890729799] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1477; }","duration":"139.943239ms","start":"2025-10-01T17:51:33.162374Z","end":"2025-10-01T17:51:33.302317Z","steps":["trace[1890729799] 'agreement among raft nodes before linearized reading'  (duration: 139.835946ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T17:51:33.302286Z","caller":"traceutil/trace.go:172","msg":"trace[890358868] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1478; }","duration":"155.147637ms","start":"2025-10-01T17:51:33.147129Z","end":"2025-10-01T17:51:33.302276Z","steps":["trace[890358868] 'process raft request'  (duration: 155.055806ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T17:51:55.486456Z","caller":"traceutil/trace.go:172","msg":"trace[1338728539] transaction","detail":"{read_only:false; response_revision:1674; number_of_response:1; }","duration":"149.967169ms","start":"2025-10-01T17:51:55.336456Z","end":"2025-10-01T17:51:55.486423Z","steps":["trace[1338728539] 'process raft request'  (duration: 149.884324ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T17:51:56.816742Z","caller":"traceutil/trace.go:172","msg":"trace[623476747] transaction","detail":"{read_only:false; response_revision:1676; number_of_response:1; }","duration":"343.357617ms","start":"2025-10-01T17:51:56.473371Z","end":"2025-10-01T17:51:56.816729Z","steps":["trace[623476747] 'process raft request'  (duration: 343.247519ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T17:51:56.816932Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T17:51:56.473348Z","time spent":"343.459446ms","remote":"127.0.0.1:47158","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-jvil7fx6cql3htvjyhoor3p2rq\" mod_revision:1600 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-jvil7fx6cql3htvjyhoor3p2rq\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-jvil7fx6cql3htvjyhoor3p2rq\" > >"}
	{"level":"info","ts":"2025-10-01T17:51:56.817116Z","caller":"traceutil/trace.go:172","msg":"trace[1167716985] linearizableReadLoop","detail":"{readStateIndex:1745; appliedIndex:1745; }","duration":"237.252035ms","start":"2025-10-01T17:51:56.579846Z","end":"2025-10-01T17:51:56.817098Z","steps":["trace[1167716985] 'read index received'  (duration: 236.67666ms)","trace[1167716985] 'applied index is now lower than readState.Index'  (duration: 6.385µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-01T17:51:56.817391Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.512422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/servicecidrs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-01T17:51:56.817458Z","caller":"traceutil/trace.go:172","msg":"trace[12091051] range","detail":"{range_begin:/registry/servicecidrs; range_end:; response_count:0; response_revision:1676; }","duration":"237.584894ms","start":"2025-10-01T17:51:56.579866Z","end":"2025-10-01T17:51:56.817451Z","steps":["trace[12091051] 'agreement among raft nodes before linearized reading'  (duration: 237.490228ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T17:51:56.817585Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.728374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-01T17:51:56.817689Z","caller":"traceutil/trace.go:172","msg":"trace[207139906] range","detail":"{range_begin:/registry/persistentvolumeclaims; range_end:; response_count:0; response_revision:1676; }","duration":"237.839579ms","start":"2025-10-01T17:51:56.579842Z","end":"2025-10-01T17:51:56.817681Z","steps":["trace[207139906] 'agreement among raft nodes before linearized reading'  (duration: 237.4356ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T17:51:56.817726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"212.03346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-01T17:51:56.817897Z","caller":"traceutil/trace.go:172","msg":"trace[774988979] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1676; }","duration":"212.219376ms","start":"2025-10-01T17:51:56.605670Z","end":"2025-10-01T17:51:56.817890Z","steps":["trace[774988979] 'agreement among raft nodes before linearized reading'  (duration: 212.041077ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:54:01 up 6 min,  0 users,  load average: 0.30, 0.84, 0.50
	Linux addons-289249 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [05fc3e000e20a757e9931297f0ac9363db6964d84fedd4d736049bc85ba342ce] <==
	 > logger="UnhandledError"
	E1001 17:49:28.215563       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.137.141:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.137.141:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.137.141:443: connect: connection refused" logger="UnhandledError"
	E1001 17:49:28.221297       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.137.141:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.137.141:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.137.141:443: connect: connection refused" logger="UnhandledError"
	I1001 17:49:28.284641       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1001 17:51:17.933522       1 conn.go:339] Error on socket receive: read tcp 192.168.39.98:8443->192.168.39.1:40832: use of closed network connection
	E1001 17:51:18.109307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.98:8443->192.168.39.1:40862: use of closed network connection
	I1001 17:51:27.324201       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.181.60"}
	I1001 17:51:33.684018       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1001 17:51:33.929453       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.209.235"}
	I1001 17:52:03.872085       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1001 17:52:27.771386       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 17:52:27.771474       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 17:52:27.817525       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 17:52:27.821616       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 17:52:27.827690       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 17:52:27.827772       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 17:52:27.877752       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 17:52:27.877817       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 17:52:27.961278       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 17:52:27.961366       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1001 17:52:28.828567       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1001 17:52:28.961330       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1001 17:52:29.071246       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1001 17:52:29.246736       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1001 17:53:59.419746       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.155.163"}
	
	
	==> kube-controller-manager [89698b6c060f0d98d35471d43c283ea2c378e4737237cd40684d28125742957a] <==
	E1001 17:52:33.079100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:52:35.430370       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:52:35.431740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:52:37.472909       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:52:37.473913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:52:38.273586       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:52:38.274762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:52:43.965041       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:52:43.966225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:52:46.630699       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:52:46.631944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:52:49.111185       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:52:49.112124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:53:00.279283       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:53:00.280260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:53:11.159595       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:53:11.161231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:53:14.445625       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:53:14.446686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:53:41.889755       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:53:41.890875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:53:44.969769       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:53:44.970969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 17:53:53.208202       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 17:53:53.209280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [bfb27544e0326c6987ca380f0f2d4abc18b8e05222937055d3237fb416660e2c] <==
	I1001 17:48:31.441030       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1001 17:48:31.541947       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1001 17:48:31.544272       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.98"]
	E1001 17:48:31.552271       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 17:48:31.723624       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1001 17:48:31.723724       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 17:48:31.723752       1 server_linux.go:132] "Using iptables Proxier"
	I1001 17:48:31.738671       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 17:48:31.740263       1 server.go:527] "Version info" version="v1.34.1"
	I1001 17:48:31.740308       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 17:48:31.750548       1 config.go:200] "Starting service config controller"
	I1001 17:48:31.750578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1001 17:48:31.750595       1 config.go:106] "Starting endpoint slice config controller"
	I1001 17:48:31.750598       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1001 17:48:31.750609       1 config.go:403] "Starting serviceCIDR config controller"
	I1001 17:48:31.750612       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1001 17:48:31.750960       1 config.go:309] "Starting node config controller"
	I1001 17:48:31.750985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1001 17:48:31.750991       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1001 17:48:31.850925       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1001 17:48:31.850949       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1001 17:48:31.850977       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b2f3f747b44efd8913bf7bbe7eb107c574cf1018d8d97549f290ba9719c81a8d] <==
	I1001 17:48:22.779991       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 17:48:22.783871       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1001 17:48:22.784218       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 17:48:22.784247       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 17:48:22.784266       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1001 17:48:22.798359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1001 17:48:22.798581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1001 17:48:22.798647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1001 17:48:22.801836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1001 17:48:22.801917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1001 17:48:22.802016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1001 17:48:22.802090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1001 17:48:22.802224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1001 17:48:22.802292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1001 17:48:22.804109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1001 17:48:22.804221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1001 17:48:22.804264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1001 17:48:22.804390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1001 17:48:22.804445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1001 17:48:22.804491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1001 17:48:22.805907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1001 17:48:22.805970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1001 17:48:22.806074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1001 17:48:22.808697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1001 17:48:23.984323       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 01 17:52:30 addons-289249 kubelet[1501]: I1001 17:52:30.936274    1501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99bdffa34c7dd96b07a960cd06ba69d940103c9a38a808b3a3eaf7949ae1948"} err="failed to get container status \"a99bdffa34c7dd96b07a960cd06ba69d940103c9a38a808b3a3eaf7949ae1948\": rpc error: code = NotFound desc = could not find container \"a99bdffa34c7dd96b07a960cd06ba69d940103c9a38a808b3a3eaf7949ae1948\": container with ID starting with a99bdffa34c7dd96b07a960cd06ba69d940103c9a38a808b3a3eaf7949ae1948 not found: ID does not exist"
	Oct 01 17:52:31 addons-289249 kubelet[1501]: I1001 17:52:31.104069    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6876951a-c770-42b6-8a2d-5ee04deaf653" path="/var/lib/kubelet/pods/6876951a-c770-42b6-8a2d-5ee04deaf653/volumes"
	Oct 01 17:52:31 addons-289249 kubelet[1501]: I1001 17:52:31.104705    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5d5abd-7366-4bc3-829b-c680cdface9a" path="/var/lib/kubelet/pods/7d5d5abd-7366-4bc3-829b-c680cdface9a/volumes"
	Oct 01 17:52:31 addons-289249 kubelet[1501]: I1001 17:52:31.105002    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2574f93-f00a-46a2-8030-ec763407d546" path="/var/lib/kubelet/pods/b2574f93-f00a-46a2-8030-ec763407d546/volumes"
	Oct 01 17:52:35 addons-289249 kubelet[1501]: E1001 17:52:35.271287    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759341155270199416  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:52:35 addons-289249 kubelet[1501]: E1001 17:52:35.271318    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759341155270199416  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:52:45 addons-289249 kubelet[1501]: E1001 17:52:45.275229    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759341165274573510  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:52:45 addons-289249 kubelet[1501]: E1001 17:52:45.275269    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759341165274573510  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:52:55 addons-289249 kubelet[1501]: E1001 17:52:55.278320    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759341175277686955  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:52:55 addons-289249 kubelet[1501]: E1001 17:52:55.278351    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759341175277686955  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:05 addons-289249 kubelet[1501]: E1001 17:53:05.280894    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759341185280328279  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:05 addons-289249 kubelet[1501]: E1001 17:53:05.280958    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759341185280328279  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:15 addons-289249 kubelet[1501]: E1001 17:53:15.283074    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759341195282787969  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:15 addons-289249 kubelet[1501]: E1001 17:53:15.283098    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759341195282787969  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:25 addons-289249 kubelet[1501]: E1001 17:53:25.286526    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759341205285083521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:25 addons-289249 kubelet[1501]: E1001 17:53:25.287212    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759341205285083521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:35 addons-289249 kubelet[1501]: E1001 17:53:35.293428    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759341215292874575  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:35 addons-289249 kubelet[1501]: E1001 17:53:35.293456    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759341215292874575  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:42 addons-289249 kubelet[1501]: I1001 17:53:42.100904    1501 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-lj7zx" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 17:53:45 addons-289249 kubelet[1501]: I1001 17:53:45.101298    1501 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 17:53:45 addons-289249 kubelet[1501]: E1001 17:53:45.297595    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759341225296496716  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:45 addons-289249 kubelet[1501]: E1001 17:53:45.297636    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759341225296496716  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:55 addons-289249 kubelet[1501]: E1001 17:53:55.300625    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759341235299927874  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:55 addons-289249 kubelet[1501]: E1001 17:53:55.300662    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759341235299927874  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 01 17:53:59 addons-289249 kubelet[1501]: I1001 17:53:59.379925    1501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m2j8\" (UniqueName: \"kubernetes.io/projected/2d6fcb50-ba10-46b1-a504-b32e421ad03e-kube-api-access-8m2j8\") pod \"hello-world-app-5d498dc89-bgfkz\" (UID: \"2d6fcb50-ba10-46b1-a504-b32e421ad03e\") " pod="default/hello-world-app-5d498dc89-bgfkz"
	
	
	==> storage-provisioner [1045d22f139a84d56d4b17f9822acc5d2833b0cfbcc53d73780cdcef80cb60db] <==
	W1001 17:53:36.105568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:38.109441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:38.114290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:40.117574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:40.125096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:42.130343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:42.136036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:44.140088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:44.145181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:46.148517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:46.159468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:48.164342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:48.169349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:50.172977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:50.180119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:52.184378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:52.191395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:54.195689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:54.202930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:56.206008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:56.211968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:58.215116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:53:58.222927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:54:00.226264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 17:54:00.233079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-289249 -n addons-289249
helpers_test.go:269: (dbg) Run:  kubectl --context addons-289249 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-bgfkz ingress-nginx-admission-create-dhnvg ingress-nginx-admission-patch-sjnj8
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-289249 describe pod hello-world-app-5d498dc89-bgfkz ingress-nginx-admission-create-dhnvg ingress-nginx-admission-patch-sjnj8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-289249 describe pod hello-world-app-5d498dc89-bgfkz ingress-nginx-admission-create-dhnvg ingress-nginx-admission-patch-sjnj8: exit status 1 (82.898099ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-bgfkz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-289249/192.168.39.98
	Start Time:       Wed, 01 Oct 2025 17:53:59 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8m2j8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8m2j8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-bgfkz to addons-289249
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dhnvg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sjnj8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-289249 describe pod hello-world-app-5d498dc89-bgfkz ingress-nginx-admission-create-dhnvg ingress-nginx-admission-patch-sjnj8: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-289249 addons disable ingress-dns --alsologtostderr -v=1: (1.493350107s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-289249 addons disable ingress --alsologtostderr -v=1: (7.769150915s)
--- FAIL: TestAddons/parallel/Ingress (157.92s)
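Note on this failure: the curl probe through ssh exited with status 28 (curl's operation-timeout code) even though the nginx pod reported Ready within 11s, which points at the ingress-nginx controller path rather than the backend pod. A minimal sketch for re-running the check by hand, assuming the addons-289249 profile is still up (these commands are illustrative and are not part of the test harness; the deployment name ingress-nginx-controller is the usual name installed by the ingress addon):

	kubectl --context addons-289249 -n ingress-nginx get pods -o wide
	kubectl --context addons-289249 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100
	out/minikube-linux-amd64 -p addons-289249 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"

If the controller logs show the request arriving, the timeout is in routing to the nginx service; if nothing arrives, the controller's binding on 127.0.0.1:80 inside the node is the suspect.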

                                                
                                    
x
+
TestCertExpiration (1077.99s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-252396 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-252396 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.339444484s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-252396 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p cert-expiration-252396 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 80 (14m6.419601986s)

                                                
                                                
-- stdout --
	* [cert-expiration-252396] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "cert-expiration-252396" primary control-plane node in "cert-expiration-252396" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Certificate client.crt has expired. Generating a new one...
	! Certificate apiserver.crt.a735cf10 has expired. Generating a new one...
	! Certificate proxy-client.crt has expired. Generating a new one...
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50237891s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.32:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.339292385s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000266317s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00062019s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.32:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.766507ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.32:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.823340032s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000898637s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001774743s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.32:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.50.32:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.766507ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.32:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.823340032s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000898637s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001774743s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.32:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.50.32:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-linux-amd64 start -p cert-expiration-252396 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false" : exit status 80
cert_options_test.go:138: *** TestCertExpiration FAILED at 2025-10-01 19:05:48.478017713 +0000 UTC m=+4716.663042290
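Note on this failure: the second start regenerated the expired certificates but kubeadm then waited four minutes for kube-apiserver and kube-scheduler livez endpoints that never came up, ending in "connection refused" on 192.168.50.32:8443. A short sketch for inspecting the guest after such a failure, assuming the cert-expiration-252396 VM is still running (commands are illustrative, not part of the harness; the cert path and runtime endpoint are taken from the log above):

	out/minikube-linux-amd64 -p cert-expiration-252396 ssh -- sudo openssl x509 -noout -enddate -subject -in /var/lib/minikube/certs/apiserver.crt
	out/minikube-linux-amd64 -p cert-expiration-252396 ssh -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube-apiserver"
	out/minikube-linux-amd64 -p cert-expiration-252396 ssh -- curl -k https://192.168.50.32:8443/livez

"Connection refused" on 8443 means nothing is listening there, so the useful question is why the kube-apiserver container keeps exiting; crictl logs on the container ID from the listing would show whether it is a certificate, etcd, or admission problem.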
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestCertExpiration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-252396 -n cert-expiration-252396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-252396 -n cert-expiration-252396: exit status 2 (236.485431ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p cert-expiration-252396 logs -n 25
helpers_test.go:260: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-371776 sudo iptables -t nat -L -n -v                                 │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo systemctl status kubelet --all --full --no-pager         │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo systemctl cat kubelet --no-pager                         │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo systemctl status docker --all --full --no-pager          │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │                     │
	│ ssh     │ -p bridge-371776 sudo systemctl cat docker --no-pager                          │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo cat /etc/docker/daemon.json                              │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo docker system info                                       │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │                     │
	│ ssh     │ -p bridge-371776 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │                     │
	│ ssh     │ -p bridge-371776 sudo systemctl cat cri-docker --no-pager                      │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │                     │
	│ ssh     │ -p bridge-371776 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo cri-dockerd --version                                    │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo systemctl status containerd --all --full --no-pager      │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │                     │
	│ ssh     │ -p bridge-371776 sudo systemctl cat containerd --no-pager                      │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo cat /lib/systemd/system/containerd.service               │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo cat /etc/containerd/config.toml                          │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo containerd config dump                                   │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo systemctl status crio --all --full --no-pager            │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo systemctl cat crio --no-pager                            │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ ssh     │ -p bridge-371776 sudo crio config                                              │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	│ delete  │ -p bridge-371776                                                               │ bridge-371776 │ jenkins │ v1.37.0 │ 01 Oct 25 19:00 UTC │ 01 Oct 25 19:00 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 18:58:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:58:26.612108   67540 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:58:26.612362   67540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:58:26.612372   67540 out.go:374] Setting ErrFile to fd 2...
	I1001 18:58:26.612377   67540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:58:26.612607   67540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 18:58:26.613088   67540 out.go:368] Setting JSON to false
	I1001 18:58:26.614224   67540 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6051,"bootTime":1759339056,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 18:58:26.614333   67540 start.go:140] virtualization: kvm guest
	I1001 18:58:26.616290   67540 out.go:179] * [bridge-371776] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1001 18:58:26.617413   67540 notify.go:220] Checking for updates...
	I1001 18:58:26.617480   67540 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 18:58:26.618784   67540 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:58:26.620096   67540 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:58:26.621297   67540 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 18:58:26.623306   67540 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 18:58:26.624558   67540 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:58:26.626012   67540 config.go:182] Loaded profile config "cert-expiration-252396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:58:26.626093   67540 config.go:182] Loaded profile config "enable-default-cni-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:58:26.626167   67540 config.go:182] Loaded profile config "flannel-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:58:26.626268   67540 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:58:26.665889   67540 out.go:179] * Using the kvm2 driver based on user configuration
	I1001 18:58:26.667407   67540 start.go:304] selected driver: kvm2
	I1001 18:58:26.667424   67540 start.go:921] validating driver "kvm2" against <nil>
	I1001 18:58:26.667449   67540 start.go:932] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:58:26.668173   67540 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:58:26.668266   67540 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:58:26.682910   67540 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 18:58:26.682944   67540 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:58:26.698601   67540 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 18:58:26.698658   67540 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1001 18:58:26.698974   67540 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:58:26.699032   67540 cni.go:84] Creating CNI manager for "bridge"
	I1001 18:58:26.699047   67540 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 18:58:26.699137   67540 start.go:348] cluster config:
	{Name:bridge-371776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-371776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:58:26.699299   67540 iso.go:125] acquiring lock: {Name:mke4f33636eb3043bce5a51fcbb56cd6b63e4b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:58:26.704323   67540 out.go:179] * Starting "bridge-371776" primary control-plane node in "bridge-371776" cluster
	I1001 18:58:26.705336   67540 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:58:26.705382   67540 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1001 18:58:26.705400   67540 cache.go:58] Caching tarball of preloaded images
	I1001 18:58:26.705505   67540 preload.go:233] Found /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 18:58:26.705516   67540 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1001 18:58:26.705623   67540 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/config.json ...
	I1001 18:58:26.705644   67540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/config.json: {Name:mk24621f47a5f71891c1372a0e60af414b04b0a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:58:26.705779   67540 start.go:360] acquireMachinesLock for bridge-371776: {Name:mk9cde4a6dd309a36e894aa2ddacad5574ffdbe7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 18:58:26.705805   67540 start.go:364] duration metric: took 14.669µs to acquireMachinesLock for "bridge-371776"
	I1001 18:58:26.705819   67540 start.go:93] Provisioning new machine with config: &{Name:bridge-371776 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-371776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:58:26.705877   67540 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 18:58:25.158309   65598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:58:25.658565   65598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:58:26.158651   65598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:58:26.658152   65598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:58:27.158671   65598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:58:27.263912   65598 kubeadm.go:1105] duration metric: took 3.912810844s to wait for elevateKubeSystemPrivileges
	I1001 18:58:27.263947   65598 kubeadm.go:394] duration metric: took 15.75894339s to StartCluster
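
The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists, which is how it knows kube-system privileges have been elevated before continuing with StartCluster. A minimal sketch of that polling pattern (binary path, kubeconfig path, and the 500ms interval are illustrative assumptions, not minikube's actual code):

package sketch

import (
    "context"
    "os/exec"
    "time"
)

// waitForDefaultSA reruns "kubectl get sa default" until it succeeds or the
// context expires, mirroring the ~500ms cadence visible in the log above.
func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
    for {
        cmd := exec.CommandContext(ctx, kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
        if cmd.Run() == nil {
            return nil // service account exists; cluster privileges are ready
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(500 * time.Millisecond):
        }
    }
}
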
	I1001 18:58:27.263968   65598 settings.go:142] acquiring lock: {Name:mk5d6ab23dfd36d7b84e4e5d63470620e0207b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:58:27.264061   65598 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:58:27.265306   65598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/kubeconfig: {Name:mkccaec248bac902ba8059942e9729c12d140d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:58:27.265587   65598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 18:58:27.265607   65598 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:58:27.265687   65598 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 18:58:27.265808   65598 addons.go:69] Setting storage-provisioner=true in profile "flannel-371776"
	I1001 18:58:27.265831   65598 addons.go:238] Setting addon storage-provisioner=true in "flannel-371776"
	I1001 18:58:27.265864   65598 config.go:182] Loaded profile config "flannel-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:58:27.265878   65598 addons.go:69] Setting default-storageclass=true in profile "flannel-371776"
	I1001 18:58:27.265895   65598 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-371776"
	I1001 18:58:27.265868   65598 host.go:66] Checking if "flannel-371776" exists ...
	I1001 18:58:27.266377   65598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:58:27.266420   65598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:58:27.266621   65598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:58:27.266657   65598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:58:27.268114   65598 out.go:179] * Verifying Kubernetes components...
	I1001 18:58:27.269346   65598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:58:27.281140   65598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39835
	I1001 18:58:27.281621   65598 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:58:27.282199   65598 main.go:141] libmachine: Using API Version  1
	I1001 18:58:27.282221   65598 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:58:27.282617   65598 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:58:27.283127   65598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:58:27.283174   65598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:58:27.285004   65598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41903
	I1001 18:58:27.285448   65598 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:58:27.285967   65598 main.go:141] libmachine: Using API Version  1
	I1001 18:58:27.286027   65598 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:58:27.286479   65598 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:58:27.286844   65598 main.go:141] libmachine: (flannel-371776) Calling .GetState
	I1001 18:58:27.292325   65598 addons.go:238] Setting addon default-storageclass=true in "flannel-371776"
	I1001 18:58:27.292387   65598 host.go:66] Checking if "flannel-371776" exists ...
	I1001 18:58:27.292825   65598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:58:27.292873   65598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:58:27.298910   65598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39141
	I1001 18:58:27.299563   65598 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:58:27.300111   65598 main.go:141] libmachine: Using API Version  1
	I1001 18:58:27.300136   65598 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:58:27.300525   65598 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:58:27.300724   65598 main.go:141] libmachine: (flannel-371776) Calling .GetState
	I1001 18:58:27.302979   65598 main.go:141] libmachine: (flannel-371776) Calling .DriverName
	I1001 18:58:27.305256   65598 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 18:58:27.306503   65598 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:58:27.306523   65598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 18:58:27.306562   65598 main.go:141] libmachine: (flannel-371776) Calling .GetSSHHostname
	I1001 18:58:27.310676   65598 main.go:141] libmachine: (flannel-371776) DBG | domain flannel-371776 has defined MAC address 52:54:00:ff:26:85 in network mk-flannel-371776
	I1001 18:58:27.311283   65598 main.go:141] libmachine: (flannel-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:26:85", ip: ""} in network mk-flannel-371776: {Iface:virbr1 ExpiryTime:2025-10-01 19:57:59 +0000 UTC Type:0 Mac:52:54:00:ff:26:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:flannel-371776 Clientid:01:52:54:00:ff:26:85}
	I1001 18:58:27.311355   65598 main.go:141] libmachine: (flannel-371776) DBG | domain flannel-371776 has defined IP address 192.168.39.58 and MAC address 52:54:00:ff:26:85 in network mk-flannel-371776
	I1001 18:58:27.311521   65598 main.go:141] libmachine: (flannel-371776) Calling .GetSSHPort
	I1001 18:58:27.311737   65598 main.go:141] libmachine: (flannel-371776) Calling .GetSSHKeyPath
	I1001 18:58:27.311925   65598 main.go:141] libmachine: (flannel-371776) Calling .GetSSHUsername
	I1001 18:58:27.312074   65598 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/flannel-371776/id_rsa Username:docker}
	I1001 18:58:27.312331   65598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42145
	I1001 18:58:27.312900   65598 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:58:27.313508   65598 main.go:141] libmachine: Using API Version  1
	I1001 18:58:27.313536   65598 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:58:27.313938   65598 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:58:27.314522   65598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:58:27.314568   65598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:58:27.331020   65598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36171
	I1001 18:58:27.331637   65598 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:58:27.333743   65598 main.go:141] libmachine: Using API Version  1
	I1001 18:58:27.333791   65598 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:58:27.334228   65598 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:58:27.334488   65598 main.go:141] libmachine: (flannel-371776) Calling .GetState
	I1001 18:58:27.336834   65598 main.go:141] libmachine: (flannel-371776) Calling .DriverName
	I1001 18:58:27.337081   65598 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 18:58:27.337104   65598 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 18:58:27.337126   65598 main.go:141] libmachine: (flannel-371776) Calling .GetSSHHostname
	I1001 18:58:27.341002   65598 main.go:141] libmachine: (flannel-371776) DBG | domain flannel-371776 has defined MAC address 52:54:00:ff:26:85 in network mk-flannel-371776
	I1001 18:58:27.341557   65598 main.go:141] libmachine: (flannel-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:26:85", ip: ""} in network mk-flannel-371776: {Iface:virbr1 ExpiryTime:2025-10-01 19:57:59 +0000 UTC Type:0 Mac:52:54:00:ff:26:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:flannel-371776 Clientid:01:52:54:00:ff:26:85}
	I1001 18:58:27.341586   65598 main.go:141] libmachine: (flannel-371776) DBG | domain flannel-371776 has defined IP address 192.168.39.58 and MAC address 52:54:00:ff:26:85 in network mk-flannel-371776
	I1001 18:58:27.342027   65598 main.go:141] libmachine: (flannel-371776) Calling .GetSSHPort
	I1001 18:58:27.342287   65598 main.go:141] libmachine: (flannel-371776) Calling .GetSSHKeyPath
	I1001 18:58:27.342476   65598 main.go:141] libmachine: (flannel-371776) Calling .GetSSHUsername
	I1001 18:58:27.342962   65598 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/flannel-371776/id_rsa Username:docker}
	I1001 18:58:27.648623   65598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 18:58:27.673027   65598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:58:27.897189   65598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 18:58:28.060783   65598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:58:28.145930   65598 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
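
The sed pipeline at 18:58:27.648623 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1). Reconstructed from that command, the replaced Corefile gains a hosts block ahead of the forward directive (plus a "log" line ahead of "errors"), roughly:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
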
	I1001 18:58:28.146251   65598 node_ready.go:35] waiting up to 15m0s for node "flannel-371776" to be "Ready" ...
	I1001 18:58:28.163064   65598 main.go:141] libmachine: Making call to close driver server
	I1001 18:58:28.163086   65598 main.go:141] libmachine: (flannel-371776) Calling .Close
	I1001 18:58:28.163416   65598 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:58:28.163452   65598 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:58:28.163452   65598 main.go:141] libmachine: (flannel-371776) DBG | Closing plugin on server side
	I1001 18:58:28.163463   65598 main.go:141] libmachine: Making call to close driver server
	I1001 18:58:28.163471   65598 main.go:141] libmachine: (flannel-371776) Calling .Close
	I1001 18:58:28.163715   65598 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:58:28.163738   65598 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:58:28.163745   65598 main.go:141] libmachine: (flannel-371776) DBG | Closing plugin on server side
	I1001 18:58:28.178948   65598 main.go:141] libmachine: Making call to close driver server
	I1001 18:58:28.178975   65598 main.go:141] libmachine: (flannel-371776) Calling .Close
	I1001 18:58:28.179249   65598 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:58:28.179266   65598 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:58:28.553098   65598 main.go:141] libmachine: Making call to close driver server
	I1001 18:58:28.553131   65598 main.go:141] libmachine: (flannel-371776) Calling .Close
	I1001 18:58:28.553555   65598 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:58:28.553576   65598 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:58:28.553585   65598 main.go:141] libmachine: Making call to close driver server
	I1001 18:58:28.553593   65598 main.go:141] libmachine: (flannel-371776) Calling .Close
	I1001 18:58:28.553837   65598 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:58:28.553855   65598 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:58:28.556469   65598 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1001 18:58:28.557666   65598 addons.go:514] duration metric: took 1.291985826s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1001 18:58:28.651439   65598 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-371776" context rescaled to 1 replicas
	I1001 18:58:26.707504   67540 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 18:58:26.707697   67540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:58:26.707763   67540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:58:26.721449   67540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I1001 18:58:26.721924   67540 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:58:26.722408   67540 main.go:141] libmachine: Using API Version  1
	I1001 18:58:26.722446   67540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:58:26.722870   67540 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:58:26.723069   67540 main.go:141] libmachine: (bridge-371776) Calling .GetMachineName
	I1001 18:58:26.723237   67540 main.go:141] libmachine: (bridge-371776) Calling .DriverName
	I1001 18:58:26.723408   67540 start.go:159] libmachine.API.Create for "bridge-371776" (driver="kvm2")
	I1001 18:58:26.723451   67540 client.go:168] LocalClient.Create starting
	I1001 18:58:26.723488   67540 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem
	I1001 18:58:26.723522   67540 main.go:141] libmachine: Decoding PEM data...
	I1001 18:58:26.723540   67540 main.go:141] libmachine: Parsing certificate...
	I1001 18:58:26.723616   67540 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem
	I1001 18:58:26.723645   67540 main.go:141] libmachine: Decoding PEM data...
	I1001 18:58:26.723663   67540 main.go:141] libmachine: Parsing certificate...
	I1001 18:58:26.723693   67540 main.go:141] libmachine: Running pre-create checks...
	I1001 18:58:26.723705   67540 main.go:141] libmachine: (bridge-371776) Calling .PreCreateCheck
	I1001 18:58:26.724066   67540 main.go:141] libmachine: (bridge-371776) Calling .GetConfigRaw
	I1001 18:58:26.724496   67540 main.go:141] libmachine: Creating machine...
	I1001 18:58:26.724511   67540 main.go:141] libmachine: (bridge-371776) Calling .Create
	I1001 18:58:26.724654   67540 main.go:141] libmachine: (bridge-371776) creating domain...
	I1001 18:58:26.724674   67540 main.go:141] libmachine: (bridge-371776) creating network...
	I1001 18:58:26.726286   67540 main.go:141] libmachine: (bridge-371776) DBG | found existing default network
	I1001 18:58:26.726491   67540 main.go:141] libmachine: (bridge-371776) DBG | <network connections='3'>
	I1001 18:58:26.726509   67540 main.go:141] libmachine: (bridge-371776) DBG |   <name>default</name>
	I1001 18:58:26.726521   67540 main.go:141] libmachine: (bridge-371776) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1001 18:58:26.726541   67540 main.go:141] libmachine: (bridge-371776) DBG |   <forward mode='nat'>
	I1001 18:58:26.726549   67540 main.go:141] libmachine: (bridge-371776) DBG |     <nat>
	I1001 18:58:26.726556   67540 main.go:141] libmachine: (bridge-371776) DBG |       <port start='1024' end='65535'/>
	I1001 18:58:26.726565   67540 main.go:141] libmachine: (bridge-371776) DBG |     </nat>
	I1001 18:58:26.726572   67540 main.go:141] libmachine: (bridge-371776) DBG |   </forward>
	I1001 18:58:26.726580   67540 main.go:141] libmachine: (bridge-371776) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1001 18:58:26.726609   67540 main.go:141] libmachine: (bridge-371776) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1001 18:58:26.726618   67540 main.go:141] libmachine: (bridge-371776) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1001 18:58:26.726629   67540 main.go:141] libmachine: (bridge-371776) DBG |     <dhcp>
	I1001 18:58:26.726643   67540 main.go:141] libmachine: (bridge-371776) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1001 18:58:26.726660   67540 main.go:141] libmachine: (bridge-371776) DBG |     </dhcp>
	I1001 18:58:26.726671   67540 main.go:141] libmachine: (bridge-371776) DBG |   </ip>
	I1001 18:58:26.726678   67540 main.go:141] libmachine: (bridge-371776) DBG | </network>
	I1001 18:58:26.726688   67540 main.go:141] libmachine: (bridge-371776) DBG | 
	I1001 18:58:26.727860   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:26.727676   67585 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4a:cb:87} reservation:<nil>}
	I1001 18:58:26.728567   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:26.728464   67585 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:07:17} reservation:<nil>}
	I1001 18:58:26.729255   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:26.729151   67585 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:ad:65} reservation:<nil>}
	I1001 18:58:26.730207   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:26.730117   67585 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025b2d0}
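
The three "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.72.0/24" show the driver walking candidate private /24s until it finds one not already claimed by an existing libvirt bridge. A simplified sketch of that selection step (the candidate list and the taken-check are illustrative assumptions, not the driver's code):

package sketch

import "fmt"

// pickFreeSubnet returns the first candidate /24 that is not already in use,
// matching the skip/use decisions logged above (39.0, 50.0 and 61.0 are taken,
// 72.0 is free).
func pickFreeSubnet(candidates []string, taken func(cidr string) bool) (string, error) {
    for _, cidr := range candidates {
        if taken(cidr) {
            continue
        }
        return cidr, nil
    }
    return "", fmt.Errorf("no free private subnet among %d candidates", len(candidates))
}
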
	I1001 18:58:26.730233   67540 main.go:141] libmachine: (bridge-371776) DBG | defining private network:
	I1001 18:58:26.730247   67540 main.go:141] libmachine: (bridge-371776) DBG | 
	I1001 18:58:26.730258   67540 main.go:141] libmachine: (bridge-371776) DBG | <network>
	I1001 18:58:26.730267   67540 main.go:141] libmachine: (bridge-371776) DBG |   <name>mk-bridge-371776</name>
	I1001 18:58:26.730274   67540 main.go:141] libmachine: (bridge-371776) DBG |   <dns enable='no'/>
	I1001 18:58:26.730284   67540 main.go:141] libmachine: (bridge-371776) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1001 18:58:26.730292   67540 main.go:141] libmachine: (bridge-371776) DBG |     <dhcp>
	I1001 18:58:26.730310   67540 main.go:141] libmachine: (bridge-371776) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1001 18:58:26.730337   67540 main.go:141] libmachine: (bridge-371776) DBG |     </dhcp>
	I1001 18:58:26.730351   67540 main.go:141] libmachine: (bridge-371776) DBG |   </ip>
	I1001 18:58:26.730373   67540 main.go:141] libmachine: (bridge-371776) DBG | </network>
	I1001 18:58:26.730391   67540 main.go:141] libmachine: (bridge-371776) DBG | 
	I1001 18:58:26.736037   67540 main.go:141] libmachine: (bridge-371776) DBG | creating private network mk-bridge-371776 192.168.72.0/24...
	I1001 18:58:26.812056   67540 main.go:141] libmachine: (bridge-371776) DBG | private network mk-bridge-371776 192.168.72.0/24 created
	I1001 18:58:26.812376   67540 main.go:141] libmachine: (bridge-371776) DBG | <network>
	I1001 18:58:26.812407   67540 main.go:141] libmachine: (bridge-371776) setting up store path in /home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776 ...
	I1001 18:58:26.812416   67540 main.go:141] libmachine: (bridge-371776) DBG |   <name>mk-bridge-371776</name>
	I1001 18:58:26.812448   67540 main.go:141] libmachine: (bridge-371776) DBG |   <uuid>95141ef8-7766-4a5e-b132-3ed258c151ee</uuid>
	I1001 18:58:26.812469   67540 main.go:141] libmachine: (bridge-371776) building disk image from file:///home/jenkins/minikube-integration/21631-9542/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1001 18:58:26.812479   67540 main.go:141] libmachine: (bridge-371776) DBG |   <bridge name='virbr4' stp='on' delay='0'/>
	I1001 18:58:26.812495   67540 main.go:141] libmachine: (bridge-371776) DBG |   <mac address='52:54:00:ce:be:e1'/>
	I1001 18:58:26.812508   67540 main.go:141] libmachine: (bridge-371776) DBG |   <dns enable='no'/>
	I1001 18:58:26.812518   67540 main.go:141] libmachine: (bridge-371776) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1001 18:58:26.812530   67540 main.go:141] libmachine: (bridge-371776) DBG |     <dhcp>
	I1001 18:58:26.812542   67540 main.go:141] libmachine: (bridge-371776) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1001 18:58:26.812554   67540 main.go:141] libmachine: (bridge-371776) DBG |     </dhcp>
	I1001 18:58:26.812572   67540 main.go:141] libmachine: (bridge-371776) Downloading /home/jenkins/minikube-integration/21631-9542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21631-9542/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1001 18:58:26.812609   67540 main.go:141] libmachine: (bridge-371776) DBG |   </ip>
	I1001 18:58:26.812629   67540 main.go:141] libmachine: (bridge-371776) DBG | </network>
	I1001 18:58:26.812641   67540 main.go:141] libmachine: (bridge-371776) DBG | 
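
The private network mk-bridge-371776 above is created by handing libvirt the <network> XML that is logged. A minimal sketch of that call sequence using the libvirt Go bindings (github.com/libvirt/libvirt-go); this is an illustration of the API, not the kvm2 driver's actual code, and error handling and the exact XML are trimmed:

package sketch

import libvirt "github.com/libvirt/libvirt-go"

// createPrivateNetwork defines and starts a private network from XML like the
// one logged above (name, <dns enable='no'/>, ip address and dhcp range).
func createPrivateNetwork(xml string) error {
    conn, err := libvirt.NewConnect("qemu:///system")
    if err != nil {
        return err
    }
    defer conn.Close()

    net, err := conn.NetworkDefineXML(xml) // persist the network definition
    if err != nil {
        return err
    }
    defer net.Free()

    if err := net.SetAutostart(true); err != nil {
        return err
    }
    return net.Create() // start it, creating the virbrN bridge seen in the dump
}
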
	I1001 18:58:26.812662   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:26.812358   67585 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 18:58:27.072532   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:27.072410   67585 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/id_rsa...
	I1001 18:58:27.852394   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:27.852214   67585 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/bridge-371776.rawdisk...
	I1001 18:58:27.852444   67540 main.go:141] libmachine: (bridge-371776) DBG | Writing magic tar header
	I1001 18:58:27.852487   67540 main.go:141] libmachine: (bridge-371776) DBG | Writing SSH key tar header
	I1001 18:58:27.852523   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:27.852321   67585 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776 ...
	I1001 18:58:27.852543   67540 main.go:141] libmachine: (bridge-371776) setting executable bit set on /home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776 (perms=drwx------)
	I1001 18:58:27.852554   67540 main.go:141] libmachine: (bridge-371776) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776
	I1001 18:58:27.852572   67540 main.go:141] libmachine: (bridge-371776) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542/.minikube/machines
	I1001 18:58:27.852581   67540 main.go:141] libmachine: (bridge-371776) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 18:58:27.852595   67540 main.go:141] libmachine: (bridge-371776) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542
	I1001 18:58:27.852614   67540 main.go:141] libmachine: (bridge-371776) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1001 18:58:27.852627   67540 main.go:141] libmachine: (bridge-371776) setting executable bit set on /home/jenkins/minikube-integration/21631-9542/.minikube/machines (perms=drwxr-xr-x)
	I1001 18:58:27.852641   67540 main.go:141] libmachine: (bridge-371776) setting executable bit set on /home/jenkins/minikube-integration/21631-9542/.minikube (perms=drwxr-xr-x)
	I1001 18:58:27.852654   67540 main.go:141] libmachine: (bridge-371776) setting executable bit set on /home/jenkins/minikube-integration/21631-9542 (perms=drwxrwxr-x)
	I1001 18:58:27.852679   67540 main.go:141] libmachine: (bridge-371776) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 18:58:27.852691   67540 main.go:141] libmachine: (bridge-371776) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 18:58:27.852699   67540 main.go:141] libmachine: (bridge-371776) DBG | checking permissions on dir: /home/jenkins
	I1001 18:58:27.852709   67540 main.go:141] libmachine: (bridge-371776) DBG | checking permissions on dir: /home
	I1001 18:58:27.852715   67540 main.go:141] libmachine: (bridge-371776) defining domain...
	I1001 18:58:27.852731   67540 main.go:141] libmachine: (bridge-371776) DBG | skipping /home - not owner
	I1001 18:58:27.853803   67540 main.go:141] libmachine: (bridge-371776) defining domain using XML: 
	I1001 18:58:27.853826   67540 main.go:141] libmachine: (bridge-371776) <domain type='kvm'>
	I1001 18:58:27.853836   67540 main.go:141] libmachine: (bridge-371776)   <name>bridge-371776</name>
	I1001 18:58:27.853846   67540 main.go:141] libmachine: (bridge-371776)   <memory unit='MiB'>3072</memory>
	I1001 18:58:27.853855   67540 main.go:141] libmachine: (bridge-371776)   <vcpu>2</vcpu>
	I1001 18:58:27.853861   67540 main.go:141] libmachine: (bridge-371776)   <features>
	I1001 18:58:27.853873   67540 main.go:141] libmachine: (bridge-371776)     <acpi/>
	I1001 18:58:27.853881   67540 main.go:141] libmachine: (bridge-371776)     <apic/>
	I1001 18:58:27.853889   67540 main.go:141] libmachine: (bridge-371776)     <pae/>
	I1001 18:58:27.853898   67540 main.go:141] libmachine: (bridge-371776)   </features>
	I1001 18:58:27.853909   67540 main.go:141] libmachine: (bridge-371776)   <cpu mode='host-passthrough'>
	I1001 18:58:27.853917   67540 main.go:141] libmachine: (bridge-371776)   </cpu>
	I1001 18:58:27.853925   67540 main.go:141] libmachine: (bridge-371776)   <os>
	I1001 18:58:27.853949   67540 main.go:141] libmachine: (bridge-371776)     <type>hvm</type>
	I1001 18:58:27.853963   67540 main.go:141] libmachine: (bridge-371776)     <boot dev='cdrom'/>
	I1001 18:58:27.854017   67540 main.go:141] libmachine: (bridge-371776)     <boot dev='hd'/>
	I1001 18:58:27.854044   67540 main.go:141] libmachine: (bridge-371776)     <bootmenu enable='no'/>
	I1001 18:58:27.854083   67540 main.go:141] libmachine: (bridge-371776)   </os>
	I1001 18:58:27.854105   67540 main.go:141] libmachine: (bridge-371776)   <devices>
	I1001 18:58:27.854119   67540 main.go:141] libmachine: (bridge-371776)     <disk type='file' device='cdrom'>
	I1001 18:58:27.854134   67540 main.go:141] libmachine: (bridge-371776)       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/boot2docker.iso'/>
	I1001 18:58:27.854146   67540 main.go:141] libmachine: (bridge-371776)       <target dev='hdc' bus='scsi'/>
	I1001 18:58:27.854153   67540 main.go:141] libmachine: (bridge-371776)       <readonly/>
	I1001 18:58:27.854164   67540 main.go:141] libmachine: (bridge-371776)     </disk>
	I1001 18:58:27.854171   67540 main.go:141] libmachine: (bridge-371776)     <disk type='file' device='disk'>
	I1001 18:58:27.854185   67540 main.go:141] libmachine: (bridge-371776)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 18:58:27.854199   67540 main.go:141] libmachine: (bridge-371776)       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/bridge-371776.rawdisk'/>
	I1001 18:58:27.854210   67540 main.go:141] libmachine: (bridge-371776)       <target dev='hda' bus='virtio'/>
	I1001 18:58:27.854223   67540 main.go:141] libmachine: (bridge-371776)     </disk>
	I1001 18:58:27.854246   67540 main.go:141] libmachine: (bridge-371776)     <interface type='network'>
	I1001 18:58:27.854266   67540 main.go:141] libmachine: (bridge-371776)       <source network='mk-bridge-371776'/>
	I1001 18:58:27.854279   67540 main.go:141] libmachine: (bridge-371776)       <model type='virtio'/>
	I1001 18:58:27.854289   67540 main.go:141] libmachine: (bridge-371776)     </interface>
	I1001 18:58:27.854332   67540 main.go:141] libmachine: (bridge-371776)     <interface type='network'>
	I1001 18:58:27.854356   67540 main.go:141] libmachine: (bridge-371776)       <source network='default'/>
	I1001 18:58:27.854372   67540 main.go:141] libmachine: (bridge-371776)       <model type='virtio'/>
	I1001 18:58:27.854383   67540 main.go:141] libmachine: (bridge-371776)     </interface>
	I1001 18:58:27.854396   67540 main.go:141] libmachine: (bridge-371776)     <serial type='pty'>
	I1001 18:58:27.854410   67540 main.go:141] libmachine: (bridge-371776)       <target port='0'/>
	I1001 18:58:27.854421   67540 main.go:141] libmachine: (bridge-371776)     </serial>
	I1001 18:58:27.854449   67540 main.go:141] libmachine: (bridge-371776)     <console type='pty'>
	I1001 18:58:27.854465   67540 main.go:141] libmachine: (bridge-371776)       <target type='serial' port='0'/>
	I1001 18:58:27.854473   67540 main.go:141] libmachine: (bridge-371776)     </console>
	I1001 18:58:27.854487   67540 main.go:141] libmachine: (bridge-371776)     <rng model='virtio'>
	I1001 18:58:27.854499   67540 main.go:141] libmachine: (bridge-371776)       <backend model='random'>/dev/random</backend>
	I1001 18:58:27.854509   67540 main.go:141] libmachine: (bridge-371776)     </rng>
	I1001 18:58:27.854520   67540 main.go:141] libmachine: (bridge-371776)   </devices>
	I1001 18:58:27.854531   67540 main.go:141] libmachine: (bridge-371776) </domain>
	I1001 18:58:27.854585   67540 main.go:141] libmachine: (bridge-371776) 
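
The <domain> XML above is then handed to libvirt to define and boot the VM, which is what the later "starting domain...", "ensuring networks are active..." and "domain is now running" lines report. A sketch of those two calls with the same Go bindings, as an illustration rather than the driver's exact code:

package sketch

import libvirt "github.com/libvirt/libvirt-go"

// defineAndStartDomain persists the domain XML and then boots the guest.
func defineAndStartDomain(conn *libvirt.Connect, domainXML string) error {
    dom, err := conn.DomainDefineXML(domainXML)
    if err != nil {
        return err
    }
    defer dom.Free()
    return dom.Create() // comparable to `virsh start bridge-371776`
}
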
	I1001 18:58:27.859249   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:18:31:04 in network default
	I1001 18:58:27.859857   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:27.859874   67540 main.go:141] libmachine: (bridge-371776) starting domain...
	I1001 18:58:27.859886   67540 main.go:141] libmachine: (bridge-371776) ensuring networks are active...
	I1001 18:58:27.860731   67540 main.go:141] libmachine: (bridge-371776) Ensuring network default is active
	I1001 18:58:27.861120   67540 main.go:141] libmachine: (bridge-371776) Ensuring network mk-bridge-371776 is active
	I1001 18:58:27.861812   67540 main.go:141] libmachine: (bridge-371776) getting domain XML...
	I1001 18:58:27.863064   67540 main.go:141] libmachine: (bridge-371776) DBG | starting domain XML:
	I1001 18:58:27.863080   67540 main.go:141] libmachine: (bridge-371776) DBG | <domain type='kvm'>
	I1001 18:58:27.863090   67540 main.go:141] libmachine: (bridge-371776) DBG |   <name>bridge-371776</name>
	I1001 18:58:27.863101   67540 main.go:141] libmachine: (bridge-371776) DBG |   <uuid>7056d237-a321-45f1-b9e3-de17659c9166</uuid>
	I1001 18:58:27.863111   67540 main.go:141] libmachine: (bridge-371776) DBG |   <memory unit='KiB'>3145728</memory>
	I1001 18:58:27.863130   67540 main.go:141] libmachine: (bridge-371776) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1001 18:58:27.863170   67540 main.go:141] libmachine: (bridge-371776) DBG |   <vcpu placement='static'>2</vcpu>
	I1001 18:58:27.863193   67540 main.go:141] libmachine: (bridge-371776) DBG |   <os>
	I1001 18:58:27.863226   67540 main.go:141] libmachine: (bridge-371776) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1001 18:58:27.863259   67540 main.go:141] libmachine: (bridge-371776) DBG |     <boot dev='cdrom'/>
	I1001 18:58:27.863273   67540 main.go:141] libmachine: (bridge-371776) DBG |     <boot dev='hd'/>
	I1001 18:58:27.863286   67540 main.go:141] libmachine: (bridge-371776) DBG |     <bootmenu enable='no'/>
	I1001 18:58:27.863294   67540 main.go:141] libmachine: (bridge-371776) DBG |   </os>
	I1001 18:58:27.863300   67540 main.go:141] libmachine: (bridge-371776) DBG |   <features>
	I1001 18:58:27.863308   67540 main.go:141] libmachine: (bridge-371776) DBG |     <acpi/>
	I1001 18:58:27.863316   67540 main.go:141] libmachine: (bridge-371776) DBG |     <apic/>
	I1001 18:58:27.863328   67540 main.go:141] libmachine: (bridge-371776) DBG |     <pae/>
	I1001 18:58:27.863339   67540 main.go:141] libmachine: (bridge-371776) DBG |   </features>
	I1001 18:58:27.863358   67540 main.go:141] libmachine: (bridge-371776) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1001 18:58:27.863370   67540 main.go:141] libmachine: (bridge-371776) DBG |   <clock offset='utc'/>
	I1001 18:58:27.863394   67540 main.go:141] libmachine: (bridge-371776) DBG |   <on_poweroff>destroy</on_poweroff>
	I1001 18:58:27.863413   67540 main.go:141] libmachine: (bridge-371776) DBG |   <on_reboot>restart</on_reboot>
	I1001 18:58:27.863459   67540 main.go:141] libmachine: (bridge-371776) DBG |   <on_crash>destroy</on_crash>
	I1001 18:58:27.863474   67540 main.go:141] libmachine: (bridge-371776) DBG |   <devices>
	I1001 18:58:27.863488   67540 main.go:141] libmachine: (bridge-371776) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1001 18:58:27.863499   67540 main.go:141] libmachine: (bridge-371776) DBG |     <disk type='file' device='cdrom'>
	I1001 18:58:27.863509   67540 main.go:141] libmachine: (bridge-371776) DBG |       <driver name='qemu' type='raw'/>
	I1001 18:58:27.863535   67540 main.go:141] libmachine: (bridge-371776) DBG |       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/boot2docker.iso'/>
	I1001 18:58:27.863551   67540 main.go:141] libmachine: (bridge-371776) DBG |       <target dev='hdc' bus='scsi'/>
	I1001 18:58:27.863563   67540 main.go:141] libmachine: (bridge-371776) DBG |       <readonly/>
	I1001 18:58:27.863574   67540 main.go:141] libmachine: (bridge-371776) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1001 18:58:27.863582   67540 main.go:141] libmachine: (bridge-371776) DBG |     </disk>
	I1001 18:58:27.863593   67540 main.go:141] libmachine: (bridge-371776) DBG |     <disk type='file' device='disk'>
	I1001 18:58:27.863606   67540 main.go:141] libmachine: (bridge-371776) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1001 18:58:27.863623   67540 main.go:141] libmachine: (bridge-371776) DBG |       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/bridge-371776.rawdisk'/>
	I1001 18:58:27.863636   67540 main.go:141] libmachine: (bridge-371776) DBG |       <target dev='hda' bus='virtio'/>
	I1001 18:58:27.863645   67540 main.go:141] libmachine: (bridge-371776) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1001 18:58:27.863654   67540 main.go:141] libmachine: (bridge-371776) DBG |     </disk>
	I1001 18:58:27.863668   67540 main.go:141] libmachine: (bridge-371776) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1001 18:58:27.863684   67540 main.go:141] libmachine: (bridge-371776) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1001 18:58:27.863701   67540 main.go:141] libmachine: (bridge-371776) DBG |     </controller>
	I1001 18:58:27.863716   67540 main.go:141] libmachine: (bridge-371776) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1001 18:58:27.863729   67540 main.go:141] libmachine: (bridge-371776) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1001 18:58:27.863740   67540 main.go:141] libmachine: (bridge-371776) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1001 18:58:27.863750   67540 main.go:141] libmachine: (bridge-371776) DBG |     </controller>
	I1001 18:58:27.863758   67540 main.go:141] libmachine: (bridge-371776) DBG |     <interface type='network'>
	I1001 18:58:27.863766   67540 main.go:141] libmachine: (bridge-371776) DBG |       <mac address='52:54:00:5d:ca:2f'/>
	I1001 18:58:27.863789   67540 main.go:141] libmachine: (bridge-371776) DBG |       <source network='mk-bridge-371776'/>
	I1001 18:58:27.863809   67540 main.go:141] libmachine: (bridge-371776) DBG |       <model type='virtio'/>
	I1001 18:58:27.863824   67540 main.go:141] libmachine: (bridge-371776) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1001 18:58:27.863851   67540 main.go:141] libmachine: (bridge-371776) DBG |     </interface>
	I1001 18:58:27.863865   67540 main.go:141] libmachine: (bridge-371776) DBG |     <interface type='network'>
	I1001 18:58:27.863874   67540 main.go:141] libmachine: (bridge-371776) DBG |       <mac address='52:54:00:18:31:04'/>
	I1001 18:58:27.863887   67540 main.go:141] libmachine: (bridge-371776) DBG |       <source network='default'/>
	I1001 18:58:27.863895   67540 main.go:141] libmachine: (bridge-371776) DBG |       <model type='virtio'/>
	I1001 18:58:27.863910   67540 main.go:141] libmachine: (bridge-371776) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1001 18:58:27.863925   67540 main.go:141] libmachine: (bridge-371776) DBG |     </interface>
	I1001 18:58:27.863954   67540 main.go:141] libmachine: (bridge-371776) DBG |     <serial type='pty'>
	I1001 18:58:27.863984   67540 main.go:141] libmachine: (bridge-371776) DBG |       <target type='isa-serial' port='0'>
	I1001 18:58:27.864016   67540 main.go:141] libmachine: (bridge-371776) DBG |         <model name='isa-serial'/>
	I1001 18:58:27.864038   67540 main.go:141] libmachine: (bridge-371776) DBG |       </target>
	I1001 18:58:27.864051   67540 main.go:141] libmachine: (bridge-371776) DBG |     </serial>
	I1001 18:58:27.864063   67540 main.go:141] libmachine: (bridge-371776) DBG |     <console type='pty'>
	I1001 18:58:27.864073   67540 main.go:141] libmachine: (bridge-371776) DBG |       <target type='serial' port='0'/>
	I1001 18:58:27.864084   67540 main.go:141] libmachine: (bridge-371776) DBG |     </console>
	I1001 18:58:27.864098   67540 main.go:141] libmachine: (bridge-371776) DBG |     <input type='mouse' bus='ps2'/>
	I1001 18:58:27.864111   67540 main.go:141] libmachine: (bridge-371776) DBG |     <input type='keyboard' bus='ps2'/>
	I1001 18:58:27.864127   67540 main.go:141] libmachine: (bridge-371776) DBG |     <audio id='1' type='none'/>
	I1001 18:58:27.864160   67540 main.go:141] libmachine: (bridge-371776) DBG |     <memballoon model='virtio'>
	I1001 18:58:27.864176   67540 main.go:141] libmachine: (bridge-371776) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1001 18:58:27.864186   67540 main.go:141] libmachine: (bridge-371776) DBG |     </memballoon>
	I1001 18:58:27.864196   67540 main.go:141] libmachine: (bridge-371776) DBG |     <rng model='virtio'>
	I1001 18:58:27.864209   67540 main.go:141] libmachine: (bridge-371776) DBG |       <backend model='random'>/dev/random</backend>
	I1001 18:58:27.864219   67540 main.go:141] libmachine: (bridge-371776) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1001 18:58:27.864228   67540 main.go:141] libmachine: (bridge-371776) DBG |     </rng>
	I1001 18:58:27.864236   67540 main.go:141] libmachine: (bridge-371776) DBG |   </devices>
	I1001 18:58:27.864245   67540 main.go:141] libmachine: (bridge-371776) DBG | </domain>
	I1001 18:58:27.864255   67540 main.go:141] libmachine: (bridge-371776) DBG | 
	I1001 18:58:29.347484   67540 main.go:141] libmachine: (bridge-371776) waiting for domain to start...
	I1001 18:58:29.348829   67540 main.go:141] libmachine: (bridge-371776) domain is now running
	I1001 18:58:29.348859   67540 main.go:141] libmachine: (bridge-371776) waiting for IP...
	I1001 18:58:29.349730   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:29.350385   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:29.350398   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:29.350775   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:29.350856   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:29.350792   67585 retry.go:31] will retry after 236.468096ms: waiting for domain to come up
	I1001 18:58:29.589271   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:29.590010   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:29.590054   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:29.590388   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:29.590417   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:29.590366   67585 retry.go:31] will retry after 325.054339ms: waiting for domain to come up
	I1001 18:58:29.916812   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:29.917487   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:29.917516   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:29.917925   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:29.917949   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:29.917887   67585 retry.go:31] will retry after 293.302602ms: waiting for domain to come up
	I1001 18:58:30.212601   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:30.213311   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:30.213336   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:30.213775   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:30.213814   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:30.213753   67585 retry.go:31] will retry after 523.369547ms: waiting for domain to come up
	I1001 18:58:30.738511   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:30.739244   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:30.739271   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:30.739666   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:30.739722   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:30.739655   67585 retry.go:31] will retry after 742.466432ms: waiting for domain to come up
	I1001 18:58:31.483714   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:31.484695   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:31.484733   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:31.485077   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:31.485101   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:31.485047   67585 retry.go:31] will retry after 857.974056ms: waiting for domain to come up
	W1001 18:58:30.150956   65598 node_ready.go:57] node "flannel-371776" has "Ready":"False" status (will retry)
	W1001 18:58:32.153213   65598 node_ready.go:57] node "flannel-371776" has "Ready":"False" status (will retry)
	W1001 18:58:34.157115   65598 node_ready.go:57] node "flannel-371776" has "Ready":"False" status (will retry)
	I1001 18:58:32.344612   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:32.345464   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:32.345500   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:32.345860   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:32.345888   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:32.345831   67585 retry.go:31] will retry after 921.824709ms: waiting for domain to come up
	I1001 18:58:33.269685   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:33.270345   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:33.270372   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:33.270744   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:33.270771   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:33.270697   67585 retry.go:31] will retry after 1.191671324s: waiting for domain to come up
	I1001 18:58:34.463464   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:34.464174   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:34.464206   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:34.464659   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:34.464685   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:34.464625   67585 retry.go:31] will retry after 1.749607174s: waiting for domain to come up
	I1001 18:58:36.215730   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:36.216408   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:36.216452   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:36.216820   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:36.216853   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:36.216793   67585 retry.go:31] will retry after 1.45969294s: waiting for domain to come up
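
The retry.go lines above poll libvirt's DHCP leases (and then ARP) for the new domain's address, waiting a little longer after each miss. A generic sketch of that wait loop; lookupIP stands in for the lease/ARP query and the delay growth is an approximation of the intervals logged above (236ms, 325ms, ... up to ~1.7s):

package sketch

import (
    "context"
    "time"
)

// waitForIP retries lookupIP with a slowly growing delay until the domain
// reports an address or the context is cancelled.
func waitForIP(ctx context.Context, lookupIP func() (string, bool)) (string, error) {
    delay := 250 * time.Millisecond
    for {
        if ip, ok := lookupIP(); ok {
            return ip, nil
        }
        select {
        case <-ctx.Done():
            return "", ctx.Err()
        case <-time.After(delay):
        }
        if delay < 2*time.Second {
            delay += delay / 2 // grow the wait between attempts
        }
    }
}
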
	I1001 18:58:35.149855   65598 node_ready.go:49] node "flannel-371776" is "Ready"
	I1001 18:58:35.149892   65598 node_ready.go:38] duration metric: took 7.003597101s for node "flannel-371776" to be "Ready" ...
	I1001 18:58:35.149906   65598 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:58:35.149962   65598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:58:35.172825   65598 api_server.go:72] duration metric: took 7.907182296s to wait for apiserver process to appear ...
	I1001 18:58:35.172877   65598 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:58:35.172903   65598 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1001 18:58:35.178905   65598 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1001 18:58:35.179949   65598 api_server.go:141] control plane version: v1.34.1
	I1001 18:58:35.179972   65598 api_server.go:131] duration metric: took 7.087879ms to wait for apiserver health ...
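
The healthz wait above issues an HTTPS GET against the apiserver on port 8443 and accepts a 200 response with body "ok". A minimal sketch of that probe; the insecure TLS config and the 500ms interval are illustrative shortcuts, not necessarily what minikube uses:

package sketch

import (
    "context"
    "crypto/tls"
    "io"
    "net/http"
    "time"
)

// waitForHealthz polls a URL such as https://192.168.39.58:8443/healthz until
// it returns 200 with the body "ok", as logged above.
func waitForHealthz(ctx context.Context, url string) error {
    client := &http.Client{
        Timeout:   2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    for {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                return nil
            }
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(500 * time.Millisecond):
        }
    }
}
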
	I1001 18:58:35.179980   65598 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:58:35.183607   65598 system_pods.go:59] 7 kube-system pods found
	I1001 18:58:35.183638   65598 system_pods.go:61] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:35.183644   65598 system_pods.go:61] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:35.183649   65598 system_pods.go:61] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:35.183653   65598 system_pods.go:61] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:35.183656   65598 system_pods.go:61] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:35.183665   65598 system_pods.go:61] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:35.183669   65598 system_pods.go:61] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 18:58:35.183676   65598 system_pods.go:74] duration metric: took 3.69006ms to wait for pod list to return data ...
	I1001 18:58:35.183687   65598 default_sa.go:34] waiting for default service account to be created ...
	I1001 18:58:35.186085   65598 default_sa.go:45] found service account: "default"
	I1001 18:58:35.186107   65598 default_sa.go:55] duration metric: took 2.414466ms for default service account to be created ...
	I1001 18:58:35.186117   65598 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 18:58:35.188955   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:35.188984   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:35.188990   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:35.189004   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:35.189009   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:35.189015   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:35.189025   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:35.189032   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 18:58:35.189058   65598 retry.go:31] will retry after 307.148151ms: missing components: kube-dns
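
Each "will retry after ... missing components: kube-dns" line above comes from listing the kube-system pods and checking whether the expected components are up; here the coredns pod (labelled k8s-app=kube-dns) is still Pending. A simplified client-go sketch of that check, using only the pod phase where minikube also inspects readiness conditions:

package sketch

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// missingKubeDNS reports whether no Running pod carrying the k8s-app=kube-dns
// label exists yet in kube-system, the condition that keeps the retries above going.
func missingKubeDNS(ctx context.Context, cs kubernetes.Interface) (bool, error) {
    pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
        LabelSelector: "k8s-app=kube-dns",
    })
    if err != nil {
        return true, err
    }
    for _, p := range pods.Items {
        if p.Status.Phase == corev1.PodRunning {
            return false, nil
        }
    }
    return true, nil
}
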
	I1001 18:58:35.501675   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:35.501716   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:35.501724   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:35.501732   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:35.501737   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:35.501743   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:35.501749   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:35.501756   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 18:58:35.501773   65598 retry.go:31] will retry after 358.099523ms: missing components: kube-dns
	I1001 18:58:35.865335   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:35.865373   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:35.865385   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:35.865395   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:35.865401   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:35.865408   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:35.865418   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:35.865423   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:35.865454   65598 retry.go:31] will retry after 464.849835ms: missing components: kube-dns
	I1001 18:58:36.336106   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:36.336142   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:36.336152   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:36.336161   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:36.336167   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:36.336173   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:36.336191   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:36.336200   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:36.336217   65598 retry.go:31] will retry after 560.520442ms: missing components: kube-dns
	I1001 18:58:36.903527   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:36.903567   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:36.903574   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:36.903583   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:36.903589   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:36.903595   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:36.903600   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:36.903605   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:36.903620   65598 retry.go:31] will retry after 548.037645ms: missing components: kube-dns
	I1001 18:58:37.456268   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:37.456301   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:37.456310   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:37.456318   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:37.456324   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:37.456329   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:37.456334   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:37.456339   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:37.456357   65598 retry.go:31] will retry after 673.776244ms: missing components: kube-dns
	I1001 18:58:38.134244   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:38.134276   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:38.134282   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:38.134288   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:38.134292   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:38.134298   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:38.134301   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:38.134314   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:38.134331   65598 retry.go:31] will retry after 726.422999ms: missing components: kube-dns
	I1001 18:58:38.866155   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:38.866190   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:38.866199   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:38.866207   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:38.866213   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:38.866218   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:38.866223   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:38.866228   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:38.866245   65598 retry.go:31] will retry after 1.388561267s: missing components: kube-dns
	I1001 18:58:37.678924   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:37.679796   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:37.679822   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:37.680224   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:37.680255   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:37.680193   67585 retry.go:31] will retry after 2.650051581s: waiting for domain to come up
	I1001 18:58:40.334077   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:40.334847   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:40.334875   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:40.335202   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:40.335275   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:40.335182   67585 retry.go:31] will retry after 3.483294891s: waiting for domain to come up
	I1001 18:58:40.260863   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:40.260899   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:40.260912   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:40.260921   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:40.260927   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:40.260933   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:40.260938   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:40.260943   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:40.260957   65598 retry.go:31] will retry after 1.278365847s: missing components: kube-dns
	I1001 18:58:41.545705   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:41.545744   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:41.545752   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:41.545761   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:41.545767   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:41.545772   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:41.545777   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:41.545782   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:41.545807   65598 retry.go:31] will retry after 1.941146523s: missing components: kube-dns
	I1001 18:58:43.491414   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:43.491468   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:43.491474   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:43.491481   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:43.491485   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:43.491489   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:43.491492   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:43.491495   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:43.491510   65598 retry.go:31] will retry after 2.396671951s: missing components: kube-dns
	I1001 18:58:43.820820   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:43.821445   67540 main.go:141] libmachine: (bridge-371776) DBG | no network interface addresses found for domain bridge-371776 (source=lease)
	I1001 18:58:43.821466   67540 main.go:141] libmachine: (bridge-371776) DBG | trying to list again with source=arp
	I1001 18:58:43.821828   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find current IP address of domain bridge-371776 in network mk-bridge-371776 (interfaces detected: [])
	I1001 18:58:43.821844   67540 main.go:141] libmachine: (bridge-371776) DBG | I1001 18:58:43.821824   67585 retry.go:31] will retry after 4.505507801s: waiting for domain to come up
	I1001 18:58:45.893677   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:45.893706   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:45.893712   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:45.893718   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:45.893722   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:45.893725   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:45.893728   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:45.893731   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:45.893744   65598 retry.go:31] will retry after 2.338487231s: missing components: kube-dns
	I1001 18:58:48.237384   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:48.237414   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:58:48.237419   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:48.237439   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:48.237444   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:48.237451   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:48.237454   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:48.237457   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:48.237474   65598 retry.go:31] will retry after 4.369616864s: missing components: kube-dns
	I1001 18:58:48.329969   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:48.330696   67540 main.go:141] libmachine: (bridge-371776) found domain IP: 192.168.72.202
	I1001 18:58:48.330724   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has current primary IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:48.330732   67540 main.go:141] libmachine: (bridge-371776) reserving static IP address...
	I1001 18:58:48.331157   67540 main.go:141] libmachine: (bridge-371776) DBG | unable to find host DHCP lease matching {name: "bridge-371776", mac: "52:54:00:5d:ca:2f", ip: "192.168.72.202"} in network mk-bridge-371776
	I1001 18:58:48.540120   67540 main.go:141] libmachine: (bridge-371776) DBG | Getting to WaitForSSH function...
	I1001 18:58:48.540152   67540 main.go:141] libmachine: (bridge-371776) reserved static IP address 192.168.72.202 for domain bridge-371776
	I1001 18:58:48.540163   67540 main.go:141] libmachine: (bridge-371776) waiting for SSH...
	I1001 18:58:48.542957   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:48.543420   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:48.543453   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:48.543643   67540 main.go:141] libmachine: (bridge-371776) DBG | Using SSH client type: external
	I1001 18:58:48.543670   67540 main.go:141] libmachine: (bridge-371776) DBG | Using SSH private key: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/id_rsa (-rw-------)
	I1001 18:58:48.543722   67540 main.go:141] libmachine: (bridge-371776) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 18:58:48.543755   67540 main.go:141] libmachine: (bridge-371776) DBG | About to run SSH command:
	I1001 18:58:48.543769   67540 main.go:141] libmachine: (bridge-371776) DBG | exit 0
	I1001 18:58:48.674475   67540 main.go:141] libmachine: (bridge-371776) DBG | SSH cmd err, output: <nil>: 
	I1001 18:58:48.674807   67540 main.go:141] libmachine: (bridge-371776) domain creation complete
	I1001 18:58:48.675197   67540 main.go:141] libmachine: (bridge-371776) Calling .GetConfigRaw
	I1001 18:58:48.675962   67540 main.go:141] libmachine: (bridge-371776) Calling .DriverName
	I1001 18:58:48.676160   67540 main.go:141] libmachine: (bridge-371776) Calling .DriverName
	I1001 18:58:48.676341   67540 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 18:58:48.676355   67540 main.go:141] libmachine: (bridge-371776) Calling .GetState
	I1001 18:58:48.677725   67540 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 18:58:48.677741   67540 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 18:58:48.677748   67540 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 18:58:48.677756   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:48.680336   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:48.680727   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:48.680760   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:48.680948   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:58:48.681126   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:48.681289   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:48.681421   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:58:48.681611   67540 main.go:141] libmachine: Using SSH client type: native
	I1001 18:58:48.681844   67540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I1001 18:58:48.681867   67540 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 18:58:48.790007   67540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:58:48.790029   67540 main.go:141] libmachine: Detecting the provisioner...
	I1001 18:58:48.790037   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:48.793367   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:48.793846   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:48.793895   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:48.794041   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:58:48.794208   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:48.794378   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:48.794547   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:58:48.794740   67540 main.go:141] libmachine: Using SSH client type: native
	I1001 18:58:48.794961   67540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I1001 18:58:48.794972   67540 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 18:58:48.902518   67540 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1001 18:58:48.902624   67540 main.go:141] libmachine: found compatible host: buildroot
	I1001 18:58:48.902640   67540 main.go:141] libmachine: Provisioning with buildroot...
	I1001 18:58:48.902653   67540 main.go:141] libmachine: (bridge-371776) Calling .GetMachineName
	I1001 18:58:48.902888   67540 buildroot.go:166] provisioning hostname "bridge-371776"
	I1001 18:58:48.902919   67540 main.go:141] libmachine: (bridge-371776) Calling .GetMachineName
	I1001 18:58:48.903066   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:48.906165   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:48.906576   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:48.906605   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:48.906894   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:58:48.907092   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:48.907242   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:48.907393   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:58:48.907545   67540 main.go:141] libmachine: Using SSH client type: native
	I1001 18:58:48.907753   67540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I1001 18:58:48.907772   67540 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-371776 && echo "bridge-371776" | sudo tee /etc/hostname
	I1001 18:58:49.035290   67540 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-371776
	
	I1001 18:58:49.035317   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:49.038356   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:49.038862   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:49.038888   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:49.039052   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:58:49.039237   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:49.039396   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:49.039541   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:58:49.039754   67540 main.go:141] libmachine: Using SSH client type: native
	I1001 18:58:49.039954   67540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I1001 18:58:49.039970   67540 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-371776' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-371776/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-371776' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 18:58:49.157929   67540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:58:49.157952   67540 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21631-9542/.minikube CaCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21631-9542/.minikube}
	I1001 18:58:49.157997   67540 buildroot.go:174] setting up certificates
	I1001 18:58:49.158007   67540 provision.go:84] configureAuth start
	I1001 18:58:49.158016   67540 main.go:141] libmachine: (bridge-371776) Calling .GetMachineName
	I1001 18:58:49.158310   67540 main.go:141] libmachine: (bridge-371776) Calling .GetIP
	I1001 18:58:49.161326   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:49.161738   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:49.161779   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:49.161983   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:49.164392   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:49.164786   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:49.164815   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:49.165014   67540 provision.go:143] copyHostCerts
	I1001 18:58:49.165071   67540 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem, removing ...
	I1001 18:58:49.165089   67540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem
	I1001 18:58:49.165186   67540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem (1082 bytes)
	I1001 18:58:49.165316   67540 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem, removing ...
	I1001 18:58:49.165329   67540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem
	I1001 18:58:49.165395   67540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem (1123 bytes)
	I1001 18:58:49.165505   67540 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem, removing ...
	I1001 18:58:49.165514   67540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem
	I1001 18:58:49.165555   67540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem (1675 bytes)
	I1001 18:58:49.165646   67540 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem org=jenkins.bridge-371776 san=[127.0.0.1 192.168.72.202 bridge-371776 localhost minikube]
	I1001 18:58:49.586852   67540 provision.go:177] copyRemoteCerts
	I1001 18:58:49.586925   67540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 18:58:49.586953   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:49.590413   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:49.590965   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:49.590997   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:49.591240   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:58:49.591477   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:49.591666   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:58:49.591904   67540 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/id_rsa Username:docker}
	I1001 18:58:49.677768   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 18:58:49.707464   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 18:58:49.737115   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 18:58:49.771363   67540 provision.go:87] duration metric: took 613.343896ms to configureAuth
	I1001 18:58:49.771392   67540 buildroot.go:189] setting minikube options for container-runtime
	I1001 18:58:49.771575   67540 config.go:182] Loaded profile config "bridge-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:58:49.771677   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:49.774692   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:49.775150   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:49.775170   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:49.775357   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:58:49.775603   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:49.775790   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:49.776016   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:58:49.776216   67540 main.go:141] libmachine: Using SSH client type: native
	I1001 18:58:49.776523   67540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I1001 18:58:49.776544   67540 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 18:58:50.024675   67540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 18:58:50.024701   67540 main.go:141] libmachine: Checking connection to Docker...
	I1001 18:58:50.024758   67540 main.go:141] libmachine: (bridge-371776) Calling .GetURL
	I1001 18:58:50.025982   67540 main.go:141] libmachine: (bridge-371776) DBG | using libvirt version 8000000
	I1001 18:58:50.028775   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.029137   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:50.029167   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.029371   67540 main.go:141] libmachine: Docker is up and running!
	I1001 18:58:50.029387   67540 main.go:141] libmachine: Reticulating splines...
	I1001 18:58:50.029395   67540 client.go:171] duration metric: took 23.305933189s to LocalClient.Create
	I1001 18:58:50.029423   67540 start.go:167] duration metric: took 23.306016385s to libmachine.API.Create "bridge-371776"
	I1001 18:58:50.029449   67540 start.go:293] postStartSetup for "bridge-371776" (driver="kvm2")
	I1001 18:58:50.029462   67540 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 18:58:50.029486   67540 main.go:141] libmachine: (bridge-371776) Calling .DriverName
	I1001 18:58:50.029740   67540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 18:58:50.029761   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:50.032228   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.032562   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:50.032591   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.032808   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:58:50.032979   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:50.033132   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:58:50.033273   67540 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/id_rsa Username:docker}
	I1001 18:58:50.120601   67540 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 18:58:50.125647   67540 info.go:137] Remote host: Buildroot 2025.02
	I1001 18:58:50.125673   67540 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/addons for local assets ...
	I1001 18:58:50.125733   67540 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/files for local assets ...
	I1001 18:58:50.125833   67540 filesync.go:149] local asset: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem -> 134692.pem in /etc/ssl/certs
	I1001 18:58:50.125980   67540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 18:58:50.138387   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem --> /etc/ssl/certs/134692.pem (1708 bytes)
	I1001 18:58:50.167551   67540 start.go:296] duration metric: took 138.088245ms for postStartSetup
	I1001 18:58:50.167604   67540 main.go:141] libmachine: (bridge-371776) Calling .GetConfigRaw
	I1001 18:58:50.168276   67540 main.go:141] libmachine: (bridge-371776) Calling .GetIP
	I1001 18:58:50.171265   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.171679   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:50.171722   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.171941   67540 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/config.json ...
	I1001 18:58:50.172151   67540 start.go:128] duration metric: took 23.466258396s to createHost
	I1001 18:58:50.172175   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:50.174722   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.175101   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:50.175136   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.175285   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:58:50.175474   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:50.175617   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:50.175794   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:58:50.175959   67540 main.go:141] libmachine: Using SSH client type: native
	I1001 18:58:50.176191   67540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I1001 18:58:50.176206   67540 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 18:58:50.284790   67540 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759345130.253059775
	
	I1001 18:58:50.284818   67540 fix.go:216] guest clock: 1759345130.253059775
	I1001 18:58:50.284829   67540 fix.go:229] Guest: 2025-10-01 18:58:50.253059775 +0000 UTC Remote: 2025-10-01 18:58:50.17216314 +0000 UTC m=+23.599559242 (delta=80.896635ms)
	I1001 18:58:50.284865   67540 fix.go:200] guest clock delta is within tolerance: 80.896635ms
	I1001 18:58:50.284874   67540 start.go:83] releasing machines lock for "bridge-371776", held for 23.579061375s
	I1001 18:58:50.284912   67540 main.go:141] libmachine: (bridge-371776) Calling .DriverName
	I1001 18:58:50.285206   67540 main.go:141] libmachine: (bridge-371776) Calling .GetIP
	I1001 18:58:50.288577   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.288967   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:50.289001   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.289237   67540 main.go:141] libmachine: (bridge-371776) Calling .DriverName
	I1001 18:58:50.289827   67540 main.go:141] libmachine: (bridge-371776) Calling .DriverName
	I1001 18:58:50.290037   67540 main.go:141] libmachine: (bridge-371776) Calling .DriverName
	I1001 18:58:50.290161   67540 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 18:58:50.290203   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:50.290298   67540 ssh_runner.go:195] Run: cat /version.json
	I1001 18:58:50.290324   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:58:50.293791   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.294083   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.294246   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:50.294283   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.294505   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:58:50.294693   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:50.294701   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:50.294770   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:50.294946   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:58:50.294953   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:58:50.295132   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:58:50.295140   67540 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/id_rsa Username:docker}
	I1001 18:58:50.295309   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:58:50.295466   67540 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/id_rsa Username:docker}
	I1001 18:58:50.412031   67540 ssh_runner.go:195] Run: systemctl --version
	I1001 18:58:50.418342   67540 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 18:58:50.583623   67540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 18:58:50.592886   67540 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 18:58:50.592965   67540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:58:50.618364   67540 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 18:58:50.618394   67540 start.go:495] detecting cgroup driver to use...
	I1001 18:58:50.618532   67540 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 18:58:50.640512   67540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 18:58:50.661926   67540 docker.go:218] disabling cri-docker service (if available) ...
	I1001 18:58:50.661988   67540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 18:58:50.680755   67540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 18:58:50.699010   67540 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 18:58:50.848652   67540 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 18:58:50.989334   67540 docker.go:234] disabling docker service ...
	I1001 18:58:50.989400   67540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 18:58:51.005002   67540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 18:58:51.019508   67540 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 18:58:51.226884   67540 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 18:58:51.367031   67540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 18:58:51.382558   67540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 18:58:51.404554   67540 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1001 18:58:51.404627   67540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:58:51.417379   67540 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 18:58:51.417486   67540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:58:51.430757   67540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:58:51.442679   67540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:58:51.454460   67540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 18:58:51.467902   67540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:58:51.480406   67540 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:58:51.502582   67540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:58:51.514859   67540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 18:58:51.525353   67540 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 18:58:51.525415   67540 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 18:58:51.544495   67540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 18:58:51.556246   67540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:58:51.693870   67540 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 18:58:51.812204   67540 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 18:58:51.812264   67540 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 18:58:51.817342   67540 start.go:563] Will wait 60s for crictl version
	I1001 18:58:51.817407   67540 ssh_runner.go:195] Run: which crictl
	I1001 18:58:51.821086   67540 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 18:58:51.864009   67540 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 18:58:51.864116   67540 ssh_runner.go:195] Run: crio --version
	I1001 18:58:51.893796   67540 ssh_runner.go:195] Run: crio --version
	I1001 18:58:51.924668   67540 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1001 18:58:52.614623   65598 system_pods.go:86] 7 kube-system pods found
	I1001 18:58:52.614660   65598 system_pods.go:89] "coredns-66bc5c9577-7vf5g" [4fa7511c-2db2-4fa5-b715-76edf1b24ba1] Running
	I1001 18:58:52.614669   65598 system_pods.go:89] "etcd-flannel-371776" [49168467-a09b-4a05-b9e9-4b42406f5f06] Running
	I1001 18:58:52.614675   65598 system_pods.go:89] "kube-apiserver-flannel-371776" [0762bc54-37c0-4468-93f6-755545f2b4f2] Running
	I1001 18:58:52.614682   65598 system_pods.go:89] "kube-controller-manager-flannel-371776" [380d242a-4990-4b87-8753-aadcafe84e74] Running
	I1001 18:58:52.614688   65598 system_pods.go:89] "kube-proxy-dwvbq" [8074c9db-40f9-4e00-a7c3-a21764ba9456] Running
	I1001 18:58:52.614713   65598 system_pods.go:89] "kube-scheduler-flannel-371776" [692824fa-3fb7-4527-9ccb-6735ad640eab] Running
	I1001 18:58:52.614724   65598 system_pods.go:89] "storage-provisioner" [5c7f7812-432a-4f22-9845-f04ddee89e8e] Running
	I1001 18:58:52.614736   65598 system_pods.go:126] duration metric: took 17.428611263s to wait for k8s-apps to be running ...
	I1001 18:58:52.614750   65598 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 18:58:52.614816   65598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:58:52.638659   65598 system_svc.go:56] duration metric: took 23.900451ms WaitForService to wait for kubelet
	I1001 18:58:52.638694   65598 kubeadm.go:578] duration metric: took 25.373056825s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:58:52.638745   65598 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:58:52.643340   65598 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 18:58:52.643364   65598 node_conditions.go:123] node cpu capacity is 2
	I1001 18:58:52.643376   65598 node_conditions.go:105] duration metric: took 4.617138ms to run NodePressure ...
	I1001 18:58:52.643387   65598 start.go:241] waiting for startup goroutines ...
	I1001 18:58:52.643393   65598 start.go:246] waiting for cluster config update ...
	I1001 18:58:52.643402   65598 start.go:255] writing updated cluster config ...
	I1001 18:58:52.643711   65598 ssh_runner.go:195] Run: rm -f paused
	I1001 18:58:52.649852   65598 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 18:58:52.654985   65598 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7vf5g" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:52.661343   65598 pod_ready.go:94] pod "coredns-66bc5c9577-7vf5g" is "Ready"
	I1001 18:58:52.661373   65598 pod_ready.go:86] duration metric: took 6.365302ms for pod "coredns-66bc5c9577-7vf5g" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:52.664967   65598 pod_ready.go:83] waiting for pod "etcd-flannel-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:52.670315   65598 pod_ready.go:94] pod "etcd-flannel-371776" is "Ready"
	I1001 18:58:52.670345   65598 pod_ready.go:86] duration metric: took 5.353768ms for pod "etcd-flannel-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:52.673959   65598 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:52.681842   65598 pod_ready.go:94] pod "kube-apiserver-flannel-371776" is "Ready"
	I1001 18:58:52.681875   65598 pod_ready.go:86] duration metric: took 7.892229ms for pod "kube-apiserver-flannel-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:52.684407   65598 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:53.055210   65598 pod_ready.go:94] pod "kube-controller-manager-flannel-371776" is "Ready"
	I1001 18:58:53.055240   65598 pod_ready.go:86] duration metric: took 370.796484ms for pod "kube-controller-manager-flannel-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:53.255744   65598 pod_ready.go:83] waiting for pod "kube-proxy-dwvbq" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:53.654505   65598 pod_ready.go:94] pod "kube-proxy-dwvbq" is "Ready"
	I1001 18:58:53.654542   65598 pod_ready.go:86] duration metric: took 398.766339ms for pod "kube-proxy-dwvbq" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:53.857143   65598 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:54.255343   65598 pod_ready.go:94] pod "kube-scheduler-flannel-371776" is "Ready"
	I1001 18:58:54.255384   65598 pod_ready.go:86] duration metric: took 398.20214ms for pod "kube-scheduler-flannel-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:58:54.255401   65598 pod_ready.go:40] duration metric: took 1.605517612s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 18:58:54.307303   65598 start.go:620] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1001 18:58:54.309249   65598 out.go:179] * Done! kubectl is now configured to use "flannel-371776" cluster and "default" namespace by default
	I1001 18:58:51.925726   67540 main.go:141] libmachine: (bridge-371776) Calling .GetIP
	I1001 18:58:51.929125   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:51.929531   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:58:51.929559   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:58:51.929850   67540 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1001 18:58:51.934300   67540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 18:58:51.948725   67540 kubeadm.go:875] updating cluster {Name:bridge-371776 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-371776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.202 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 18:58:51.948838   67540 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:58:51.948887   67540 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:58:51.983936   67540 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1001 18:58:51.984008   67540 ssh_runner.go:195] Run: which lz4
	I1001 18:58:51.988323   67540 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 18:58:51.993448   67540 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 18:58:51.993475   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1001 18:58:53.489569   67540 crio.go:462] duration metric: took 1.501285735s to copy over tarball
	I1001 18:58:53.489642   67540 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 18:58:55.155944   67540 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.666274132s)
	I1001 18:58:55.155973   67540 crio.go:469] duration metric: took 1.666380684s to extract the tarball
	I1001 18:58:55.155980   67540 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 18:58:55.199807   67540 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:58:55.243985   67540 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:58:55.244024   67540 cache_images.go:85] Images are preloaded, skipping loading
	I1001 18:58:55.244035   67540 kubeadm.go:926] updating node { 192.168.72.202 8443 v1.34.1 crio true true} ...
	I1001 18:58:55.244142   67540 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-371776 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:bridge-371776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1001 18:58:55.244225   67540 ssh_runner.go:195] Run: crio config
	I1001 18:58:55.290974   67540 cni.go:84] Creating CNI manager for "bridge"
	I1001 18:58:55.291003   67540 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 18:58:55.291029   67540 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.202 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-371776 NodeName:bridge-371776 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 18:58:55.291177   67540 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-371776"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.202"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.202"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 18:58:55.291251   67540 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1001 18:58:55.303510   67540 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 18:58:55.303595   67540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 18:58:55.315785   67540 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 18:58:55.337575   67540 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 18:58:55.357915   67540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1001 18:58:55.377347   67540 ssh_runner.go:195] Run: grep 192.168.72.202	control-plane.minikube.internal$ /etc/hosts
	I1001 18:58:55.381547   67540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 18:58:55.396035   67540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:58:55.533955   67540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:58:55.570791   67540 certs.go:68] Setting up /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776 for IP: 192.168.72.202
	I1001 18:58:55.570809   67540 certs.go:194] generating shared ca certs ...
	I1001 18:58:55.570825   67540 certs.go:226] acquiring lock for ca certs: {Name:mkce5c4f8bce1e11a833f05c4b70f07050ce8e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:58:55.570981   67540 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key
	I1001 18:58:55.571053   67540 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key
	I1001 18:58:55.571067   67540 certs.go:256] generating profile certs ...
	I1001 18:58:55.571128   67540 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.key
	I1001 18:58:55.571154   67540 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt with IP's: []
	I1001 18:58:55.799831   67540 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt ...
	I1001 18:58:55.799856   67540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: {Name:mke1311dca819bac7096037345c8d06c64773aef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:58:55.800061   67540 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.key ...
	I1001 18:58:55.800076   67540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.key: {Name:mk80f355c7782995564663da02b1d196cebbd9fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:58:55.800189   67540 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.key.91edd4d7
	I1001 18:58:55.800207   67540 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.crt.91edd4d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.202]
	I1001 18:58:55.834506   67540 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.crt.91edd4d7 ...
	I1001 18:58:55.834530   67540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.crt.91edd4d7: {Name:mkbda0e4dee0305624e8df296ba1c934f0d6fa8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:58:55.834730   67540 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.key.91edd4d7 ...
	I1001 18:58:55.834759   67540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.key.91edd4d7: {Name:mkec88b000cb9b45ad1bc7e419969f007dc117b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:58:55.834875   67540 certs.go:381] copying /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.crt.91edd4d7 -> /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.crt
	I1001 18:58:55.834966   67540 certs.go:385] copying /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.key.91edd4d7 -> /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.key
	I1001 18:58:55.835026   67540 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/proxy-client.key
	I1001 18:58:55.835044   67540 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/proxy-client.crt with IP's: []
	I1001 18:58:55.938408   67540 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/proxy-client.crt ...
	I1001 18:58:55.938453   67540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/proxy-client.crt: {Name:mk243bb6092d48a7820ce088ee676a1022e0ceff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:58:55.938646   67540 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/proxy-client.key ...
	I1001 18:58:55.938661   67540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/proxy-client.key: {Name:mk09ad4c469205d2baf1da07b0456d65dc8abdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:58:55.938871   67540 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469.pem (1338 bytes)
	W1001 18:58:55.938907   67540 certs.go:480] ignoring /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469_empty.pem, impossibly tiny 0 bytes
	I1001 18:58:55.938916   67540 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 18:58:55.938936   67540 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem (1082 bytes)
	I1001 18:58:55.938957   67540 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem (1123 bytes)
	I1001 18:58:55.938979   67540 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem (1675 bytes)
	I1001 18:58:55.939016   67540 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem (1708 bytes)
	I1001 18:58:55.939531   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 18:58:55.973021   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 18:58:56.001132   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 18:58:56.029672   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 18:58:56.059059   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 18:58:56.087390   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 18:58:56.116556   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 18:58:56.144900   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 18:58:56.174669   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem --> /usr/share/ca-certificates/134692.pem (1708 bytes)
	I1001 18:58:56.203079   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 18:58:56.231921   67540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469.pem --> /usr/share/ca-certificates/13469.pem (1338 bytes)
	I1001 18:58:56.266201   67540 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 18:58:56.286270   67540 ssh_runner.go:195] Run: openssl version
	I1001 18:58:56.294050   67540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134692.pem && ln -fs /usr/share/ca-certificates/134692.pem /etc/ssl/certs/134692.pem"
	I1001 18:58:56.309032   67540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134692.pem
	I1001 18:58:56.314354   67540 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 17:56 /usr/share/ca-certificates/134692.pem
	I1001 18:58:56.314409   67540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134692.pem
	I1001 18:58:56.321913   67540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134692.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 18:58:56.335064   67540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 18:58:56.348479   67540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:58:56.353350   67540 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:58:56.353407   67540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:58:56.360289   67540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 18:58:56.372780   67540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13469.pem && ln -fs /usr/share/ca-certificates/13469.pem /etc/ssl/certs/13469.pem"
	I1001 18:58:56.385166   67540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13469.pem
	I1001 18:58:56.390004   67540 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 17:56 /usr/share/ca-certificates/13469.pem
	I1001 18:58:56.390052   67540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13469.pem
	I1001 18:58:56.396946   67540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13469.pem /etc/ssl/certs/51391683.0"
	I1001 18:58:56.411203   67540 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 18:58:56.415764   67540 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 18:58:56.415822   67540 kubeadm.go:392] StartCluster: {Name:bridge-371776 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-371776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.202 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:58:56.415888   67540 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 18:58:56.415947   67540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:58:56.454358   67540 cri.go:89] found id: ""
	I1001 18:58:56.454454   67540 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 18:58:56.468524   67540 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 18:58:56.480472   67540 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 18:58:56.492350   67540 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 18:58:56.492370   67540 kubeadm.go:157] found existing configuration files:
	
	I1001 18:58:56.492420   67540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 18:58:56.502665   67540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 18:58:56.502727   67540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 18:58:56.514551   67540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 18:58:56.525017   67540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 18:58:56.525069   67540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 18:58:56.536642   67540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 18:58:56.546873   67540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 18:58:56.546924   67540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 18:58:56.557847   67540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 18:58:56.568616   67540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 18:58:56.568680   67540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 18:58:56.580068   67540 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 18:58:56.723882   67540 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 18:59:10.194419   67540 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I1001 18:59:10.194510   67540 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 18:59:10.194619   67540 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 18:59:10.194764   67540 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 18:59:10.194885   67540 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 18:59:10.194968   67540 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 18:59:10.196509   67540 out.go:252]   - Generating certificates and keys ...
	I1001 18:59:10.196574   67540 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 18:59:10.196665   67540 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 18:59:10.196729   67540 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 18:59:10.196779   67540 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 18:59:10.196840   67540 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 18:59:10.196895   67540 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 18:59:10.196969   67540 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 18:59:10.197070   67540 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-371776 localhost] and IPs [192.168.72.202 127.0.0.1 ::1]
	I1001 18:59:10.197147   67540 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 18:59:10.197345   67540 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-371776 localhost] and IPs [192.168.72.202 127.0.0.1 ::1]
	I1001 18:59:10.197407   67540 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 18:59:10.197511   67540 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 18:59:10.197608   67540 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 18:59:10.197690   67540 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 18:59:10.197749   67540 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 18:59:10.197809   67540 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 18:59:10.197882   67540 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 18:59:10.198007   67540 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 18:59:10.198071   67540 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 18:59:10.198190   67540 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 18:59:10.198297   67540 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 18:59:10.199833   67540 out.go:252]   - Booting up control plane ...
	I1001 18:59:10.199914   67540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 18:59:10.199994   67540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 18:59:10.200088   67540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 18:59:10.200187   67540 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 18:59:10.200271   67540 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1001 18:59:10.200365   67540 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1001 18:59:10.200502   67540 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 18:59:10.200552   67540 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 18:59:10.200779   67540 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 18:59:10.200880   67540 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 18:59:10.200931   67540 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.50140761s
	I1001 18:59:10.201064   67540 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1001 18:59:10.201157   67540 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.72.202:8443/livez
	I1001 18:59:10.201286   67540 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1001 18:59:10.201401   67540 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1001 18:59:10.201508   67540 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.150433614s
	I1001 18:59:10.201639   67540 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.793986715s
	I1001 18:59:10.201739   67540 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.501608257s
	I1001 18:59:10.201880   67540 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 18:59:10.202017   67540 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 18:59:10.202074   67540 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 18:59:10.202226   67540 kubeadm.go:310] [mark-control-plane] Marking the node bridge-371776 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 18:59:10.202278   67540 kubeadm.go:310] [bootstrap-token] Using token: d3qjho.6c8lexqa03t92te8
	I1001 18:59:10.203584   67540 out.go:252]   - Configuring RBAC rules ...
	I1001 18:59:10.203701   67540 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 18:59:10.203797   67540 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 18:59:10.203950   67540 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 18:59:10.204135   67540 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 18:59:10.204256   67540 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 18:59:10.204385   67540 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 18:59:10.204585   67540 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 18:59:10.204655   67540 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 18:59:10.204723   67540 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 18:59:10.204732   67540 kubeadm.go:310] 
	I1001 18:59:10.204822   67540 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 18:59:10.204837   67540 kubeadm.go:310] 
	I1001 18:59:10.204939   67540 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 18:59:10.204951   67540 kubeadm.go:310] 
	I1001 18:59:10.204987   67540 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 18:59:10.205089   67540 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 18:59:10.205220   67540 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 18:59:10.205236   67540 kubeadm.go:310] 
	I1001 18:59:10.205318   67540 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 18:59:10.205328   67540 kubeadm.go:310] 
	I1001 18:59:10.205405   67540 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 18:59:10.205415   67540 kubeadm.go:310] 
	I1001 18:59:10.205522   67540 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 18:59:10.205641   67540 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 18:59:10.205745   67540 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 18:59:10.205759   67540 kubeadm.go:310] 
	I1001 18:59:10.205886   67540 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 18:59:10.206005   67540 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 18:59:10.206016   67540 kubeadm.go:310] 
	I1001 18:59:10.206131   67540 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d3qjho.6c8lexqa03t92te8 \
	I1001 18:59:10.206293   67540 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bbcb137d3fae8b26e7a39819525d4d9dcd5cccec4e46324317306fb87c30e08c \
	I1001 18:59:10.206328   67540 kubeadm.go:310] 	--control-plane 
	I1001 18:59:10.206340   67540 kubeadm.go:310] 
	I1001 18:59:10.206489   67540 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 18:59:10.206499   67540 kubeadm.go:310] 
	I1001 18:59:10.206620   67540 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d3qjho.6c8lexqa03t92te8 \
	I1001 18:59:10.206767   67540 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bbcb137d3fae8b26e7a39819525d4d9dcd5cccec4e46324317306fb87c30e08c 
	I1001 18:59:10.206780   67540 cni.go:84] Creating CNI manager for "bridge"
	I1001 18:59:10.208268   67540 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 18:59:10.209448   67540 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 18:59:10.223132   67540 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 18:59:10.246040   67540 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 18:59:10.246215   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:10.246222   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-371776 minikube.k8s.io/updated_at=2025_10_01T18_59_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=de12e0f54d226aca16c1f78311795f5ec99c1492 minikube.k8s.io/name=bridge-371776 minikube.k8s.io/primary=true
	I1001 18:59:10.409843   67540 ops.go:34] apiserver oom_adj: -16
	I1001 18:59:10.409941   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:10.910733   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:11.410389   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:11.910134   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:12.410869   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:12.910570   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:13.410417   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:13.910679   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:14.410274   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:14.910340   67540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:59:15.002026   67540 kubeadm.go:1105] duration metric: took 4.75588505s to wait for elevateKubeSystemPrivileges
	I1001 18:59:15.002065   67540 kubeadm.go:394] duration metric: took 18.586245013s to StartCluster
	I1001 18:59:15.002087   67540 settings.go:142] acquiring lock: {Name:mk5d6ab23dfd36d7b84e4e5d63470620e0207b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:59:15.002169   67540 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:59:15.003271   67540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/kubeconfig: {Name:mkccaec248bac902ba8059942e9729c12d140d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:59:15.003541   67540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 18:59:15.003547   67540 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.202 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:59:15.003627   67540 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 18:59:15.003715   67540 addons.go:69] Setting storage-provisioner=true in profile "bridge-371776"
	I1001 18:59:15.003732   67540 addons.go:238] Setting addon storage-provisioner=true in "bridge-371776"
	I1001 18:59:15.003792   67540 host.go:66] Checking if "bridge-371776" exists ...
	I1001 18:59:15.003798   67540 config.go:182] Loaded profile config "bridge-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:59:15.003740   67540 addons.go:69] Setting default-storageclass=true in profile "bridge-371776"
	I1001 18:59:15.003892   67540 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-371776"
	I1001 18:59:15.004211   67540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:59:15.004261   67540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:59:15.004295   67540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:59:15.004342   67540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:59:15.005251   67540 out.go:179] * Verifying Kubernetes components...
	I1001 18:59:15.006569   67540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:59:15.018267   67540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44433
	I1001 18:59:15.018831   67540 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:59:15.019359   67540 main.go:141] libmachine: Using API Version  1
	I1001 18:59:15.019385   67540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:59:15.019742   67540 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:59:15.020320   67540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:59:15.020367   67540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:59:15.021481   67540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35489
	I1001 18:59:15.022028   67540 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:59:15.022619   67540 main.go:141] libmachine: Using API Version  1
	I1001 18:59:15.022647   67540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:59:15.023024   67540 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:59:15.023215   67540 main.go:141] libmachine: (bridge-371776) Calling .GetState
	I1001 18:59:15.026907   67540 addons.go:238] Setting addon default-storageclass=true in "bridge-371776"
	I1001 18:59:15.026947   67540 host.go:66] Checking if "bridge-371776" exists ...
	I1001 18:59:15.027281   67540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:59:15.027324   67540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:59:15.035553   67540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43293
	I1001 18:59:15.036188   67540 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:59:15.036788   67540 main.go:141] libmachine: Using API Version  1
	I1001 18:59:15.036804   67540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:59:15.037172   67540 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:59:15.037378   67540 main.go:141] libmachine: (bridge-371776) Calling .GetState
	I1001 18:59:15.039309   67540 main.go:141] libmachine: (bridge-371776) Calling .DriverName
	I1001 18:59:15.040868   67540 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 18:59:15.041134   67540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34113
	I1001 18:59:15.041622   67540 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:59:15.042113   67540 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:59:15.042130   67540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 18:59:15.042145   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:59:15.042258   67540 main.go:141] libmachine: Using API Version  1
	I1001 18:59:15.042273   67540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:59:15.042674   67540 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:59:15.043143   67540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:59:15.043202   67540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:59:15.045505   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:59:15.045993   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:59:15.046024   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:59:15.046213   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:59:15.046379   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:59:15.046516   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:59:15.046651   67540 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/id_rsa Username:docker}
	I1001 18:59:15.057503   67540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I1001 18:59:15.058073   67540 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:59:15.058561   67540 main.go:141] libmachine: Using API Version  1
	I1001 18:59:15.058594   67540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:59:15.058993   67540 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:59:15.059200   67540 main.go:141] libmachine: (bridge-371776) Calling .GetState
	I1001 18:59:15.061344   67540 main.go:141] libmachine: (bridge-371776) Calling .DriverName
	I1001 18:59:15.061592   67540 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 18:59:15.061606   67540 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 18:59:15.061618   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHHostname
	I1001 18:59:15.064875   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:59:15.065525   67540 main.go:141] libmachine: (bridge-371776) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:ca:2f", ip: ""} in network mk-bridge-371776: {Iface:virbr4 ExpiryTime:2025-10-01 19:58:43 +0000 UTC Type:0 Mac:52:54:00:5d:ca:2f Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:bridge-371776 Clientid:01:52:54:00:5d:ca:2f}
	I1001 18:59:15.065574   67540 main.go:141] libmachine: (bridge-371776) DBG | domain bridge-371776 has defined IP address 192.168.72.202 and MAC address 52:54:00:5d:ca:2f in network mk-bridge-371776
	I1001 18:59:15.065794   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHPort
	I1001 18:59:15.065995   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHKeyPath
	I1001 18:59:15.066152   67540 main.go:141] libmachine: (bridge-371776) Calling .GetSSHUsername
	I1001 18:59:15.066340   67540 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/bridge-371776/id_rsa Username:docker}
	I1001 18:59:15.263451   67540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 18:59:15.323589   67540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:59:15.490438   67540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:59:15.501837   67540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 18:59:15.922783   67540 start.go:976] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1001 18:59:15.924002   67540 node_ready.go:35] waiting up to 15m0s for node "bridge-371776" to be "Ready" ...
	I1001 18:59:15.937326   67540 node_ready.go:49] node "bridge-371776" is "Ready"
	I1001 18:59:15.937354   67540 node_ready.go:38] duration metric: took 13.318937ms for node "bridge-371776" to be "Ready" ...
	I1001 18:59:15.937366   67540 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:59:15.937413   67540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:59:16.465498   67540 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-371776" context rescaled to 1 replicas
	I1001 18:59:16.635664   67540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.145178802s)
	I1001 18:59:16.635719   67540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.133852459s)
	I1001 18:59:16.635752   67540 main.go:141] libmachine: Making call to close driver server
	I1001 18:59:16.635763   67540 main.go:141] libmachine: (bridge-371776) Calling .Close
	I1001 18:59:16.635775   67540 api_server.go:72] duration metric: took 1.632202079s to wait for apiserver process to appear ...
	I1001 18:59:16.635788   67540 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:59:16.635724   67540 main.go:141] libmachine: Making call to close driver server
	I1001 18:59:16.635819   67540 api_server.go:253] Checking apiserver healthz at https://192.168.72.202:8443/healthz ...
	I1001 18:59:16.635821   67540 main.go:141] libmachine: (bridge-371776) Calling .Close
	I1001 18:59:16.636091   67540 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:59:16.636109   67540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:59:16.636113   67540 main.go:141] libmachine: (bridge-371776) DBG | Closing plugin on server side
	I1001 18:59:16.636118   67540 main.go:141] libmachine: Making call to close driver server
	I1001 18:59:16.636127   67540 main.go:141] libmachine: (bridge-371776) Calling .Close
	I1001 18:59:16.636236   67540 main.go:141] libmachine: (bridge-371776) DBG | Closing plugin on server side
	I1001 18:59:16.636284   67540 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:59:16.636300   67540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:59:16.636311   67540 main.go:141] libmachine: Making call to close driver server
	I1001 18:59:16.636326   67540 main.go:141] libmachine: (bridge-371776) Calling .Close
	I1001 18:59:16.636406   67540 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:59:16.636420   67540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:59:16.636686   67540 main.go:141] libmachine: (bridge-371776) DBG | Closing plugin on server side
	I1001 18:59:16.636757   67540 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:59:16.636784   67540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:59:16.656393   67540 api_server.go:279] https://192.168.72.202:8443/healthz returned 200:
	ok
	I1001 18:59:16.663709   67540 api_server.go:141] control plane version: v1.34.1
	I1001 18:59:16.663740   67540 api_server.go:131] duration metric: took 27.942762ms to wait for apiserver health ...
	I1001 18:59:16.663750   67540 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:59:16.670531   67540 main.go:141] libmachine: Making call to close driver server
	I1001 18:59:16.670601   67540 main.go:141] libmachine: (bridge-371776) Calling .Close
	I1001 18:59:16.670932   67540 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:59:16.670949   67540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:59:16.670952   67540 main.go:141] libmachine: (bridge-371776) DBG | Closing plugin on server side
	I1001 18:59:16.673774   67540 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1001 18:59:16.674203   67540 system_pods.go:59] 8 kube-system pods found
	I1001 18:59:16.674232   67540 system_pods.go:61] "coredns-66bc5c9577-5h45r" [617a9e5a-77f9-4877-b476-75ce04f8a758] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:59:16.674239   67540 system_pods.go:61] "coredns-66bc5c9577-mtqp9" [f750c81c-e3e3-4192-931c-ffc491d42159] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:59:16.674248   67540 system_pods.go:61] "etcd-bridge-371776" [535f15c6-38b6-4ab0-807c-2e53be20917d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:59:16.674254   67540 system_pods.go:61] "kube-apiserver-bridge-371776" [a1dca077-b971-4518-be33-848e99e6db36] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:59:16.674261   67540 system_pods.go:61] "kube-controller-manager-bridge-371776" [42ac3a87-8463-486d-a195-eeac58ccc52c] Running
	I1001 18:59:16.674266   67540 system_pods.go:61] "kube-proxy-2c44n" [81117754-6541-41c0-84f4-70a87cce846f] Running
	I1001 18:59:16.674271   67540 system_pods.go:61] "kube-scheduler-bridge-371776" [31c53577-17b9-418f-9f20-95e19bdc93ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:59:16.674277   67540 system_pods.go:61] "storage-provisioner" [f244db9a-11af-4fa6-9544-afc0f5adbc19] Pending
	I1001 18:59:16.674287   67540 system_pods.go:74] duration metric: took 10.527561ms to wait for pod list to return data ...
	I1001 18:59:16.674300   67540 default_sa.go:34] waiting for default service account to be created ...
	I1001 18:59:16.674890   67540 addons.go:514] duration metric: took 1.671264949s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 18:59:16.679807   67540 default_sa.go:45] found service account: "default"
	I1001 18:59:16.679838   67540 default_sa.go:55] duration metric: took 5.524462ms for default service account to be created ...
	I1001 18:59:16.679850   67540 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 18:59:16.684354   67540 system_pods.go:86] 8 kube-system pods found
	I1001 18:59:16.684390   67540 system_pods.go:89] "coredns-66bc5c9577-5h45r" [617a9e5a-77f9-4877-b476-75ce04f8a758] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:59:16.684401   67540 system_pods.go:89] "coredns-66bc5c9577-mtqp9" [f750c81c-e3e3-4192-931c-ffc491d42159] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:59:16.684411   67540 system_pods.go:89] "etcd-bridge-371776" [535f15c6-38b6-4ab0-807c-2e53be20917d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:59:16.684419   67540 system_pods.go:89] "kube-apiserver-bridge-371776" [a1dca077-b971-4518-be33-848e99e6db36] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:59:16.684453   67540 system_pods.go:89] "kube-controller-manager-bridge-371776" [42ac3a87-8463-486d-a195-eeac58ccc52c] Running
	I1001 18:59:16.684461   67540 system_pods.go:89] "kube-proxy-2c44n" [81117754-6541-41c0-84f4-70a87cce846f] Running
	I1001 18:59:16.684474   67540 system_pods.go:89] "kube-scheduler-bridge-371776" [31c53577-17b9-418f-9f20-95e19bdc93ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:59:16.684483   67540 system_pods.go:89] "storage-provisioner" [f244db9a-11af-4fa6-9544-afc0f5adbc19] Pending
	I1001 18:59:16.684516   67540 retry.go:31] will retry after 278.228476ms: missing components: kube-dns
	I1001 18:59:16.975847   67540 system_pods.go:86] 8 kube-system pods found
	I1001 18:59:16.975882   67540 system_pods.go:89] "coredns-66bc5c9577-5h45r" [617a9e5a-77f9-4877-b476-75ce04f8a758] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:59:16.975894   67540 system_pods.go:89] "coredns-66bc5c9577-mtqp9" [f750c81c-e3e3-4192-931c-ffc491d42159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:59:16.975925   67540 system_pods.go:89] "etcd-bridge-371776" [535f15c6-38b6-4ab0-807c-2e53be20917d] Running
	I1001 18:59:16.975935   67540 system_pods.go:89] "kube-apiserver-bridge-371776" [a1dca077-b971-4518-be33-848e99e6db36] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:59:16.975941   67540 system_pods.go:89] "kube-controller-manager-bridge-371776" [42ac3a87-8463-486d-a195-eeac58ccc52c] Running
	I1001 18:59:16.975949   67540 system_pods.go:89] "kube-proxy-2c44n" [81117754-6541-41c0-84f4-70a87cce846f] Running
	I1001 18:59:16.975956   67540 system_pods.go:89] "kube-scheduler-bridge-371776" [31c53577-17b9-418f-9f20-95e19bdc93ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:59:16.975966   67540 system_pods.go:89] "storage-provisioner" [f244db9a-11af-4fa6-9544-afc0f5adbc19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 18:59:16.975976   67540 system_pods.go:126] duration metric: took 296.119069ms to wait for k8s-apps to be running ...
	I1001 18:59:16.975989   67540 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 18:59:16.976053   67540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:59:17.023650   67540 system_svc.go:56] duration metric: took 47.651342ms WaitForService to wait for kubelet
	I1001 18:59:17.023689   67540 kubeadm.go:578] duration metric: took 2.02011573s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:59:17.023713   67540 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:59:17.031848   67540 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 18:59:17.031886   67540 node_conditions.go:123] node cpu capacity is 2
	I1001 18:59:17.031905   67540 node_conditions.go:105] duration metric: took 8.186021ms to run NodePressure ...
	I1001 18:59:17.031919   67540 start.go:241] waiting for startup goroutines ...
	I1001 18:59:17.031928   67540 start.go:246] waiting for cluster config update ...
	I1001 18:59:17.031943   67540 start.go:255] writing updated cluster config ...
	I1001 18:59:17.032333   67540 ssh_runner.go:195] Run: rm -f paused
	I1001 18:59:17.046697   67540 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 18:59:17.052456   67540 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5h45r" in "kube-system" namespace to be "Ready" or be gone ...
	W1001 18:59:19.059303   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:21.059673   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:23.559072   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:26.058782   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:28.059217   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:30.558442   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:32.559014   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:34.559649   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:37.057775   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:39.059187   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:41.559405   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:44.058636   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:46.558701   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:49.058457   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:51.058958   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	W1001 18:59:53.059051   67540 pod_ready.go:104] pod "coredns-66bc5c9577-5h45r" is not "Ready", error: <nil>
	I1001 18:59:54.556826   67540 pod_ready.go:94] pod "coredns-66bc5c9577-5h45r" is "Ready"
	I1001 18:59:54.556849   67540 pod_ready.go:86] duration metric: took 37.504363067s for pod "coredns-66bc5c9577-5h45r" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:54.556857   67540 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mtqp9" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:54.559163   67540 pod_ready.go:99] pod "coredns-66bc5c9577-mtqp9" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-mtqp9" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-mtqp9" not found
	I1001 18:59:54.559175   67540 pod_ready.go:86] duration metric: took 2.313413ms for pod "coredns-66bc5c9577-mtqp9" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:54.562231   67540 pod_ready.go:83] waiting for pod "etcd-bridge-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:54.570077   67540 pod_ready.go:94] pod "etcd-bridge-371776" is "Ready"
	I1001 18:59:54.570101   67540 pod_ready.go:86] duration metric: took 7.85545ms for pod "etcd-bridge-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:54.663099   67540 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:54.669530   67540 pod_ready.go:94] pod "kube-apiserver-bridge-371776" is "Ready"
	I1001 18:59:54.669553   67540 pod_ready.go:86] duration metric: took 6.430327ms for pod "kube-apiserver-bridge-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:54.672716   67540 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:54.956297   67540 pod_ready.go:94] pod "kube-controller-manager-bridge-371776" is "Ready"
	I1001 18:59:54.956330   67540 pod_ready.go:86] duration metric: took 283.592397ms for pod "kube-controller-manager-bridge-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:55.156519   67540 pod_ready.go:83] waiting for pod "kube-proxy-2c44n" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:55.556685   67540 pod_ready.go:94] pod "kube-proxy-2c44n" is "Ready"
	I1001 18:59:55.556716   67540 pod_ready.go:86] duration metric: took 400.172496ms for pod "kube-proxy-2c44n" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:55.758200   67540 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:56.155775   67540 pod_ready.go:94] pod "kube-scheduler-bridge-371776" is "Ready"
	I1001 18:59:56.155810   67540 pod_ready.go:86] duration metric: took 397.580268ms for pod "kube-scheduler-bridge-371776" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:59:56.155823   67540 pod_ready.go:40] duration metric: took 39.109096835s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 18:59:56.198833   67540 start.go:620] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1001 18:59:56.200256   67540 out.go:179] * Done! kubectl is now configured to use "bridge-371776" cluster and "default" namespace by default
	I1001 19:01:44.218802   55504 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.32:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1001 19:01:44.218878   55504 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 19:01:44.219267   55504 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I1001 19:01:44.219321   55504 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 19:01:44.219412   55504 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 19:01:44.219587   55504 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 19:01:44.219738   55504 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 19:01:44.219824   55504 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 19:01:44.221504   55504 out.go:252]   - Generating certificates and keys ...
	I1001 19:01:44.221600   55504 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 19:01:44.221698   55504 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 19:01:44.221766   55504 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 19:01:44.221816   55504 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 19:01:44.221884   55504 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 19:01:44.221928   55504 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 19:01:44.221989   55504 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 19:01:44.222043   55504 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 19:01:44.222108   55504 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 19:01:44.222173   55504 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 19:01:44.222203   55504 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 19:01:44.222250   55504 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 19:01:44.222291   55504 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 19:01:44.222336   55504 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 19:01:44.222379   55504 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 19:01:44.222453   55504 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 19:01:44.222519   55504 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 19:01:44.222627   55504 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 19:01:44.222722   55504 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 19:01:44.224932   55504 out.go:252]   - Booting up control plane ...
	I1001 19:01:44.225022   55504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 19:01:44.225105   55504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 19:01:44.225160   55504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 19:01:44.225244   55504 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 19:01:44.225374   55504 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1001 19:01:44.225512   55504 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1001 19:01:44.225588   55504 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 19:01:44.225632   55504 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 19:01:44.225748   55504 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 19:01:44.225874   55504 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 19:01:44.225924   55504 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.50237891s
	I1001 19:01:44.226007   55504 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1001 19:01:44.226087   55504 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.50.32:8443/livez
	I1001 19:01:44.226185   55504 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1001 19:01:44.226270   55504 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1001 19:01:44.226333   55504 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.339292385s
	I1001 19:01:44.226439   55504 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000266317s
	I1001 19:01:44.226522   55504 kubeadm.go:310] [control-plane-check] kube-scheduler is not healthy after 4m0.00062019s
	I1001 19:01:44.226524   55504 kubeadm.go:310] 
	I1001 19:01:44.226626   55504 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I1001 19:01:44.226748   55504 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 19:01:44.226819   55504 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1001 19:01:44.226907   55504 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 19:01:44.226993   55504 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I1001 19:01:44.227109   55504 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1001 19:01:44.227126   55504 kubeadm.go:310] 
	W1001 19:01:44.227216   55504 out.go:285] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50237891s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.32:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.339292385s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000266317s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00062019s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.32:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1001 19:01:44.227260   55504 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1001 19:01:45.730208   55504 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.502923576s)
	I1001 19:01:45.730276   55504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:01:45.747993   55504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 19:01:45.759301   55504 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 19:01:45.759312   55504 kubeadm.go:157] found existing configuration files:
	
	I1001 19:01:45.759352   55504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 19:01:45.770125   55504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 19:01:45.770160   55504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 19:01:45.783021   55504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 19:01:45.795195   55504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 19:01:45.795235   55504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 19:01:45.805964   55504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 19:01:45.815858   55504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 19:01:45.815916   55504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 19:01:45.826927   55504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 19:01:45.837161   55504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 19:01:45.837203   55504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 19:01:45.848194   55504 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 19:01:45.892892   55504 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I1001 19:01:45.892962   55504 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 19:01:45.988124   55504 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 19:01:45.988250   55504 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 19:01:45.988392   55504 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 19:01:45.997868   55504 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 19:01:46.000249   55504 out.go:252]   - Generating certificates and keys ...
	I1001 19:01:46.000350   55504 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 19:01:46.000462   55504 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 19:01:46.000540   55504 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 19:01:46.000589   55504 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 19:01:46.000666   55504 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 19:01:46.000713   55504 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 19:01:46.000767   55504 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 19:01:46.000816   55504 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 19:01:46.000882   55504 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 19:01:46.000949   55504 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 19:01:46.000980   55504 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 19:01:46.001032   55504 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 19:01:46.350193   55504 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 19:01:46.456026   55504 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 19:01:46.650940   55504 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 19:01:46.816106   55504 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 19:01:46.914179   55504 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 19:01:46.914558   55504 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 19:01:46.916822   55504 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 19:01:46.918608   55504 out.go:252]   - Booting up control plane ...
	I1001 19:01:46.918697   55504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 19:01:46.918789   55504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 19:01:46.918911   55504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 19:01:46.939028   55504 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 19:01:46.939146   55504 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1001 19:01:46.948950   55504 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1001 19:01:46.949056   55504 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 19:01:46.949114   55504 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 19:01:47.110451   55504 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 19:01:47.110603   55504 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 19:01:47.613094   55504 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.766507ms
	I1001 19:01:47.615006   55504 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1001 19:01:47.615090   55504 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.50.32:8443/livez
	I1001 19:01:47.615199   55504 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1001 19:01:47.615313   55504 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1001 19:01:49.438349   55504 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.823340032s
	I1001 19:05:47.618105   55504 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.000898637s
	I1001 19:05:47.618600   55504 kubeadm.go:310] [control-plane-check] kube-scheduler is not healthy after 4m0.001774743s
	I1001 19:05:47.618614   55504 kubeadm.go:310] 
	I1001 19:05:47.618689   55504 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
	I1001 19:05:47.618759   55504 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1001 19:05:47.618834   55504 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1001 19:05:47.618915   55504 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1001 19:05:47.619010   55504 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I1001 19:05:47.619124   55504 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1001 19:05:47.619129   55504 kubeadm.go:310] 
	I1001 19:05:47.621266   55504 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 19:05:47.621684   55504 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.32:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.50.32:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1001 19:05:47.621741   55504 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1001 19:05:47.621794   55504 kubeadm.go:394] duration metric: took 12m23.443140153s to StartCluster
	I1001 19:05:47.621825   55504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1001 19:05:47.621872   55504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 19:05:47.666072   55504 cri.go:89] found id: ""
	I1001 19:05:47.666084   55504 logs.go:282] 0 containers: []
	W1001 19:05:47.666090   55504 logs.go:284] No container was found matching "kube-apiserver"
	I1001 19:05:47.666095   55504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1001 19:05:47.666151   55504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 19:05:47.702915   55504 cri.go:89] found id: "dbecebc7a0eb894558790e3a354c249af091883b9a6a21ab86ce5ae77842fec5"
	I1001 19:05:47.702925   55504 cri.go:89] found id: ""
	I1001 19:05:47.702931   55504 logs.go:282] 1 containers: [dbecebc7a0eb894558790e3a354c249af091883b9a6a21ab86ce5ae77842fec5]
	I1001 19:05:47.702980   55504 ssh_runner.go:195] Run: which crictl
	I1001 19:05:47.707768   55504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1001 19:05:47.707831   55504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 19:05:47.743532   55504 cri.go:89] found id: ""
	I1001 19:05:47.743546   55504 logs.go:282] 0 containers: []
	W1001 19:05:47.743553   55504 logs.go:284] No container was found matching "coredns"
	I1001 19:05:47.743559   55504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1001 19:05:47.743608   55504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 19:05:47.780987   55504 cri.go:89] found id: ""
	I1001 19:05:47.781001   55504 logs.go:282] 0 containers: []
	W1001 19:05:47.781008   55504 logs.go:284] No container was found matching "kube-scheduler"
	I1001 19:05:47.781013   55504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1001 19:05:47.781060   55504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 19:05:47.819217   55504 cri.go:89] found id: ""
	I1001 19:05:47.819231   55504 logs.go:282] 0 containers: []
	W1001 19:05:47.819237   55504 logs.go:284] No container was found matching "kube-proxy"
	I1001 19:05:47.819241   55504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 19:05:47.819290   55504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 19:05:47.857303   55504 cri.go:89] found id: "b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25"
	I1001 19:05:47.857313   55504 cri.go:89] found id: ""
	I1001 19:05:47.857319   55504 logs.go:282] 1 containers: [b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25]
	I1001 19:05:47.857365   55504 ssh_runner.go:195] Run: which crictl
	I1001 19:05:47.861530   55504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1001 19:05:47.861583   55504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 19:05:47.896423   55504 cri.go:89] found id: ""
	I1001 19:05:47.896450   55504 logs.go:282] 0 containers: []
	W1001 19:05:47.896461   55504 logs.go:284] No container was found matching "kindnet"
	I1001 19:05:47.896465   55504 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1001 19:05:47.896518   55504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 19:05:47.935154   55504 cri.go:89] found id: ""
	I1001 19:05:47.935168   55504 logs.go:282] 0 containers: []
	W1001 19:05:47.935175   55504 logs.go:284] No container was found matching "storage-provisioner"
	I1001 19:05:47.935182   55504 logs.go:123] Gathering logs for describe nodes ...
	I1001 19:05:47.935192   55504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1001 19:05:48.007682   55504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1001 19:05:48.007694   55504 logs.go:123] Gathering logs for etcd [dbecebc7a0eb894558790e3a354c249af091883b9a6a21ab86ce5ae77842fec5] ...
	I1001 19:05:48.007706   55504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbecebc7a0eb894558790e3a354c249af091883b9a6a21ab86ce5ae77842fec5"
	I1001 19:05:48.050527   55504 logs.go:123] Gathering logs for kube-controller-manager [b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25] ...
	I1001 19:05:48.050544   55504 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25"
	I1001 19:05:48.087452   55504 logs.go:123] Gathering logs for CRI-O ...
	I1001 19:05:48.087467   55504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1001 19:05:48.295346   55504 logs.go:123] Gathering logs for container status ...
	I1001 19:05:48.295364   55504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 19:05:48.335994   55504 logs.go:123] Gathering logs for kubelet ...
	I1001 19:05:48.336014   55504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 19:05:48.445698   55504 logs.go:123] Gathering logs for dmesg ...
	I1001 19:05:48.445716   55504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1001 19:05:48.461813   55504 out.go:434] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.766507ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.32:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.823340032s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000898637s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001774743s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.32:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.50.32:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1001 19:05:48.461866   55504 out.go:285] * 
	W1001 19:05:48.461938   55504 out.go:285] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.766507ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.32:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.823340032s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000898637s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001774743s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.32:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.50.32:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 19:05:48.461958   55504 out.go:285] * 
	W1001 19:05:48.463906   55504 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 19:05:48.467216   55504 out.go:203] 
	W1001 19:05:48.468505   55504 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.766507ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.32:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is healthy after 1.823340032s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000898637s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001774743s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.32:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.50.32:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1001 19:05:48.468528   55504 out.go:285] * 
	I1001 19:05:48.469921   55504 out.go:203] 
	
	
	==> CRI-O <==
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.144800412Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=edc1ec02-d1a6-423a-a6bf-679298c3842c name=/runtime.v1.RuntimeService/Version
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.145762128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee6c7c4e-210f-456d-a7b6-8fca495485fe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.146114612Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759345549146096075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee6c7c4e-210f-456d-a7b6-8fca495485fe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.146644494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b9860d2-1294-415a-9c1e-a04e0e09c616 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.146697344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b9860d2-1294-415a-9c1e-a04e0e09c616 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.146769604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25,PodSandboxId:70dd735ceb6c218318cf1b4e821b0f15b788ef803b92465468d01a6a82d030f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:19,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759345530414155457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 044c88854ee7462ee05fc4c1fcc8b119,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.contain
er.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 19,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbecebc7a0eb894558790e3a354c249af091883b9a6a21ab86ce5ae77842fec5,PodSandboxId:33699ccf53e47f48572b02ecdd1471af735c24f9085dafe43e9a9c112d663cb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:5,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759345308034064427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c29f5e05076c01ced7239fe3f2ace8f,},Anno
tations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b9860d2-1294-415a-9c1e-a04e0e09c616 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.167700678Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b3fa6915-15b6-47ef-bdd4-47899e2269a6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.168676414Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:70dd735ceb6c218318cf1b4e821b0f15b788ef803b92465468d01a6a82d030f2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-cert-expiration-252396,Uid:044c88854ee7462ee05fc4c1fcc8b119,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759345307855814701,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 044c88854ee7462ee05fc4c1fcc8b119,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 044c88854ee7462ee05fc4c1fcc8b119,kubernetes.io/config.seen: 2025-10-01T19:01:47.378325674Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:06af41da1837f47883a6b682cb352cd2a16217a5c07ab1a7fd2848763de793cf,Metadat
a:&PodSandboxMetadata{Name:kube-apiserver-cert-expiration-252396,Uid:b56d806db2c01eb74f042e8e03ad778a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759345307842979999,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56d806db2c01eb74f042e8e03ad778a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.32:8443,kubernetes.io/config.hash: b56d806db2c01eb74f042e8e03ad778a,kubernetes.io/config.seen: 2025-10-01T19:01:47.378324754Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f4130bea708f0d082b5848140f209d4a9ed28d674feb935c06efecd5c823ad79,Metadata:&PodSandboxMetadata{Name:kube-scheduler-cert-expiration-252396,Uid:afa65b62923bdd7691c550320166c98e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759345307825997100,Labels:map[st
ring]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa65b62923bdd7691c550320166c98e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: afa65b62923bdd7691c550320166c98e,kubernetes.io/config.seen: 2025-10-01T19:01:47.378320462Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:33699ccf53e47f48572b02ecdd1471af735c24f9085dafe43e9a9c112d663cb0,Metadata:&PodSandboxMetadata{Name:etcd-cert-expiration-252396,Uid:8c29f5e05076c01ced7239fe3f2ace8f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759345307825113881,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c29f5e05076c01ced7239fe3f2ace8f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.adv
ertise-client-urls: https://192.168.50.32:2379,kubernetes.io/config.hash: 8c29f5e05076c01ced7239fe3f2ace8f,kubernetes.io/config.seen: 2025-10-01T19:01:47.378323603Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b3fa6915-15b6-47ef-bdd4-47899e2269a6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.170111488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7452ebb-e05d-484a-8faa-c62c9fd1f0b1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.170184448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7452ebb-e05d-484a-8faa-c62c9fd1f0b1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.170303922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25,PodSandboxId:70dd735ceb6c218318cf1b4e821b0f15b788ef803b92465468d01a6a82d030f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:19,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759345530414155457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 044c88854ee7462ee05fc4c1fcc8b119,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.contain
er.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 19,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbecebc7a0eb894558790e3a354c249af091883b9a6a21ab86ce5ae77842fec5,PodSandboxId:33699ccf53e47f48572b02ecdd1471af735c24f9085dafe43e9a9c112d663cb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:5,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759345308034064427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c29f5e05076c01ced7239fe3f2ace8f,},Anno
tations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7452ebb-e05d-484a-8faa-c62c9fd1f0b1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.185569196Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2089612-c816-4d1e-bf1b-1f2f888a37b9 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.185683824Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2089612-c816-4d1e-bf1b-1f2f888a37b9 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.187002608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef8c09ab-6253-4b33-8667-81f920b1f6e7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.187498303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759345549187476381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef8c09ab-6253-4b33-8667-81f920b1f6e7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.188159699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f42d3883-fc8b-4d15-868d-36875f97dc68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.188243131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f42d3883-fc8b-4d15-868d-36875f97dc68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.188334733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25,PodSandboxId:70dd735ceb6c218318cf1b4e821b0f15b788ef803b92465468d01a6a82d030f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:19,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759345530414155457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 044c88854ee7462ee05fc4c1fcc8b119,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.contain
er.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 19,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbecebc7a0eb894558790e3a354c249af091883b9a6a21ab86ce5ae77842fec5,PodSandboxId:33699ccf53e47f48572b02ecdd1471af735c24f9085dafe43e9a9c112d663cb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:5,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759345308034064427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c29f5e05076c01ced7239fe3f2ace8f,},Anno
tations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f42d3883-fc8b-4d15-868d-36875f97dc68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.221077295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9e06607-3adf-4aaa-ad3d-b595b25076e0 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.221170046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9e06607-3adf-4aaa-ad3d-b595b25076e0 name=/runtime.v1.RuntimeService/Version
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.222961844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=684b4a1d-e171-4eb7-9394-98c7b55481b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.223542182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759345549223520786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=684b4a1d-e171-4eb7-9394-98c7b55481b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.223993549Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b22c161-57ac-4de0-b601-2f70822e4c7f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.224090388Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b22c161-57ac-4de0-b601-2f70822e4c7f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 19:05:49 cert-expiration-252396 crio[3307]: time="2025-10-01 19:05:49.224201522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25,PodSandboxId:70dd735ceb6c218318cf1b4e821b0f15b788ef803b92465468d01a6a82d030f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:19,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759345530414155457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 044c88854ee7462ee05fc4c1fcc8b119,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.contain
er.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 19,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbecebc7a0eb894558790e3a354c249af091883b9a6a21ab86ce5ae77842fec5,PodSandboxId:33699ccf53e47f48572b02ecdd1471af735c24f9085dafe43e9a9c112d663cb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:5,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759345308034064427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-252396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c29f5e05076c01ced7239fe3f2ace8f,},Anno
tations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b22c161-57ac-4de0-b601-2f70822e4c7f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b0e3baf9fff99       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   18 seconds ago      Exited              kube-controller-manager   19                  70dd735ceb6c2       kube-controller-manager-cert-expiration-252396
	dbecebc7a0eb8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   4 minutes ago       Running             etcd                      5                   33699ccf53e47       etcd-cert-expiration-252396
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +4.464981] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 1 18:49] kauditd_printk_skb: 192 callbacks suppressed
	[  +6.340972] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 1 18:53] kauditd_printk_skb: 316 callbacks suppressed
	[  +4.547932] kauditd_printk_skb: 186 callbacks suppressed
	[  +0.138972] kauditd_printk_skb: 46 callbacks suppressed
	[ +16.954280] kauditd_printk_skb: 55 callbacks suppressed
	[Oct 1 18:54] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.657802] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 1 18:55] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 1 18:57] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.321051] kauditd_printk_skb: 17 callbacks suppressed
	[  +2.116359] kauditd_printk_skb: 82 callbacks suppressed
	[ +11.381942] kauditd_printk_skb: 20 callbacks suppressed
	[Oct 1 18:58] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.326639] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 1 18:59] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 1 19:00] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 1 19:01] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.426306] kauditd_printk_skb: 108 callbacks suppressed
	[Oct 1 19:02] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.181659] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 1 19:03] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 1 19:04] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 1 19:05] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [dbecebc7a0eb894558790e3a354c249af091883b9a6a21ab86ce5ae77842fec5] <==
	{"level":"info","ts":"2025-10-01T19:01:48.885293Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"fbd4dd8524dacdec is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-01T19:01:48.885341Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"fbd4dd8524dacdec became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-01T19:01:48.885389Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 1"}
	{"level":"info","ts":"2025-10-01T19:01:48.885401Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"fbd4dd8524dacdec has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-10-01T19:01:48.885414Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"fbd4dd8524dacdec became candidate at term 2"}
	{"level":"info","ts":"2025-10-01T19:01:48.888632Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 2"}
	{"level":"info","ts":"2025-10-01T19:01:48.888716Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"fbd4dd8524dacdec has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-10-01T19:01:48.888746Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"fbd4dd8524dacdec became leader at term 2"}
	{"level":"info","ts":"2025-10-01T19:01:48.888766Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 2"}
	{"level":"info","ts":"2025-10-01T19:01:48.889899Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:cert-expiration-252396 ClientURLs:[https://192.168.50.32:2379]}","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-01T19:01:48.890078Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-01T19:01:48.890203Z","caller":"etcdserver/server.go:2404","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-01T19:01:48.890343Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-01T19:01:48.895158Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-10-01T19:01:48.895466Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-01T19:01:48.895585Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-01T19:01:48.898393Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-01T19:01:48.894757Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-01T19:01:48.898653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-01T19:01:48.898704Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-01T19:01:48.898767Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-01T19:01:48.898788Z","caller":"etcdserver/server.go:2424","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-01T19:01:48.898851Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-10-01T19:01:48.898893Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-10-01T19:01:48.899385Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	
	
	==> kernel <==
	 19:05:49 up 17 min,  0 users,  load average: 0.06, 0.08, 0.10
	Linux cert-expiration-252396 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-controller-manager [b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25] <==
	I1001 19:05:31.204923       1 serving.go:386] Generated self-signed cert in-memory
	I1001 19:05:31.960416       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1001 19:05:31.960457       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:05:31.961910       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 19:05:31.962080       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 19:05:31.962159       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1001 19:05:31.962211       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1001 19:05:41.964190       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.50.32:8443/healthz\": dial tcp 192.168.50.32:8443: connect: connection refused"
	
	
	==> kubelet <==
	Oct 01 19:05:37 cert-expiration-252396 kubelet[11881]: E1001 19:05:37.231257   11881 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.50.32:8443/api/v1/nodes\": dial tcp 192.168.50.32:8443: connect: connection refused" node="cert-expiration-252396"
	Oct 01 19:05:37 cert-expiration-252396 kubelet[11881]: E1001 19:05:37.485395   11881 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759345537484974060  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 01 19:05:37 cert-expiration-252396 kubelet[11881]: E1001 19:05:37.485436   11881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759345537484974060  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 01 19:05:38 cert-expiration-252396 kubelet[11881]: E1001 19:05:38.402544   11881 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cert-expiration-252396\" not found" node="cert-expiration-252396"
	Oct 01 19:05:38 cert-expiration-252396 kubelet[11881]: E1001 19:05:38.402842   11881 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cert-expiration-252396\" not found" node="cert-expiration-252396"
	Oct 01 19:05:38 cert-expiration-252396 kubelet[11881]: E1001 19:05:38.410144   11881 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-scheduler_kube-scheduler-cert-expiration-252396_kube-system_afa65b62923bdd7691c550320166c98e_1\" is already in use by 0238d0674241790d6b8f0231462196fed81f5c63d643a08118dc4311a2bbf112. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="f4130bea708f0d082b5848140f209d4a9ed28d674feb935c06efecd5c823ad79"
	Oct 01 19:05:38 cert-expiration-252396 kubelet[11881]: E1001 19:05:38.410411   11881 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-scheduler start failed in pod kube-scheduler-cert-expiration-252396_kube-system(afa65b62923bdd7691c550320166c98e): CreateContainerError: the container name \"k8s_kube-scheduler_kube-scheduler-cert-expiration-252396_kube-system_afa65b62923bdd7691c550320166c98e_1\" is already in use by 0238d0674241790d6b8f0231462196fed81f5c63d643a08118dc4311a2bbf112. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 01 19:05:38 cert-expiration-252396 kubelet[11881]: E1001 19:05:38.410510   11881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"the container name \\\"k8s_kube-scheduler_kube-scheduler-cert-expiration-252396_kube-system_afa65b62923bdd7691c550320166c98e_1\\\" is already in use by 0238d0674241790d6b8f0231462196fed81f5c63d643a08118dc4311a2bbf112. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-scheduler-cert-expiration-252396" podUID="afa65b62923bdd7691c550320166c98e"
	Oct 01 19:05:38 cert-expiration-252396 kubelet[11881]: E1001 19:05:38.412336   11881 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-apiserver_kube-apiserver-cert-expiration-252396_kube-system_b56d806db2c01eb74f042e8e03ad778a_1\" is already in use by a3471fe1c0c90ea92c85607d8ddccf6afe32f1092f2449444ff51b914b6671e2. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="06af41da1837f47883a6b682cb352cd2a16217a5c07ab1a7fd2848763de793cf"
	Oct 01 19:05:38 cert-expiration-252396 kubelet[11881]: E1001 19:05:38.412438   11881 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-apiserver start failed in pod kube-apiserver-cert-expiration-252396_kube-system(b56d806db2c01eb74f042e8e03ad778a): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-cert-expiration-252396_kube-system_b56d806db2c01eb74f042e8e03ad778a_1\" is already in use by a3471fe1c0c90ea92c85607d8ddccf6afe32f1092f2449444ff51b914b6671e2. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 01 19:05:38 cert-expiration-252396 kubelet[11881]: E1001 19:05:38.412475   11881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-cert-expiration-252396_kube-system_b56d806db2c01eb74f042e8e03ad778a_1\\\" is already in use by a3471fe1c0c90ea92c85607d8ddccf6afe32f1092f2449444ff51b914b6671e2. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-cert-expiration-252396" podUID="b56d806db2c01eb74f042e8e03ad778a"
	Oct 01 19:05:41 cert-expiration-252396 kubelet[11881]: E1001 19:05:41.202962   11881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.50.32:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.32:8443: connect: connection refused" event="&Event{ObjectMeta:{cert-expiration-252396.186a733c563b2027  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:cert-expiration-252396,UID:cert-expiration-252396,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node cert-expiration-252396 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:cert-expiration-252396,},FirstTimestamp:2025-10-01 19:01:47.423547431 +0000 UTC m=+0.321849893,LastTimestamp:2025-10-01 19:01:47.423547431 +0000 UTC m=+0.321849893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingIns
tance:cert-expiration-252396,}"
	Oct 01 19:05:42 cert-expiration-252396 kubelet[11881]: I1001 19:05:42.148747   11881 scope.go:117] "RemoveContainer" containerID="f69f6dcba4a65c36e09367718fa2434a45e37b00c6adf67d7e31a9e885f47dc3"
	Oct 01 19:05:42 cert-expiration-252396 kubelet[11881]: E1001 19:05:42.150872   11881 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cert-expiration-252396\" not found" node="cert-expiration-252396"
	Oct 01 19:05:42 cert-expiration-252396 kubelet[11881]: I1001 19:05:42.150927   11881 scope.go:117] "RemoveContainer" containerID="b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25"
	Oct 01 19:05:42 cert-expiration-252396 kubelet[11881]: E1001 19:05:42.151034   11881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-cert-expiration-252396_kube-system(044c88854ee7462ee05fc4c1fcc8b119)\"" pod="kube-system/kube-controller-manager-cert-expiration-252396" podUID="044c88854ee7462ee05fc4c1fcc8b119"
	Oct 01 19:05:44 cert-expiration-252396 kubelet[11881]: E1001 19:05:44.034442   11881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.50.32:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-252396?timeout=10s\": dial tcp 192.168.50.32:8443: connect: connection refused" interval="7s"
	Oct 01 19:05:44 cert-expiration-252396 kubelet[11881]: I1001 19:05:44.233476   11881 kubelet_node_status.go:75] "Attempting to register node" node="cert-expiration-252396"
	Oct 01 19:05:44 cert-expiration-252396 kubelet[11881]: E1001 19:05:44.233918   11881 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.50.32:8443/api/v1/nodes\": dial tcp 192.168.50.32:8443: connect: connection refused" node="cert-expiration-252396"
	Oct 01 19:05:44 cert-expiration-252396 kubelet[11881]: E1001 19:05:44.834895   11881 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cert-expiration-252396\" not found" node="cert-expiration-252396"
	Oct 01 19:05:44 cert-expiration-252396 kubelet[11881]: I1001 19:05:44.834954   11881 scope.go:117] "RemoveContainer" containerID="b0e3baf9fff99b69837f96f3aa448804ad492fd1081b08a14b4e31b8371c3f25"
	Oct 01 19:05:44 cert-expiration-252396 kubelet[11881]: E1001 19:05:44.835093   11881 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-cert-expiration-252396_kube-system(044c88854ee7462ee05fc4c1fcc8b119)\"" pod="kube-system/kube-controller-manager-cert-expiration-252396" podUID="044c88854ee7462ee05fc4c1fcc8b119"
	Oct 01 19:05:47 cert-expiration-252396 kubelet[11881]: E1001 19:05:47.487186   11881 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759345547486622913  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 01 19:05:47 cert-expiration-252396 kubelet[11881]: E1001 19:05:47.487279   11881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759345547486622913  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 01 19:05:48 cert-expiration-252396 kubelet[11881]: E1001 19:05:48.111275   11881 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.50.32:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.32:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-252396 -n cert-expiration-252396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-252396 -n cert-expiration-252396: exit status 2 (218.561809ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "cert-expiration-252396" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-252396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-252396
--- FAIL: TestCertExpiration (1077.99s)
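
The kubeadm output captured above already points at the fastest way to narrow this failure down: inspect the control-plane containers directly over the CRI-O socket. A minimal sketch of doing that against this profile, assuming the VM is still running and reachable with `minikube ssh` (the profile name, socket path, and container ID are taken from the logs above; the final cleanup step is a hypothetical follow-up, not something the test itself performs):

	# open a shell on the cert-expiration node
	out/minikube-linux-amd64 ssh -p cert-expiration-252396
	# inside the node: list all Kubernetes containers, including exited ones (kubeadm's own suggestion)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read the logs of the crash-looping kube-controller-manager shown in the container status table
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs b0e3baf9fff99
	# the kubelet errors above show stale container names ("already in use") blocking the
	# kube-apiserver and kube-scheduler restarts; removing the stale exited container would
	# free the name (hypothetical cleanup step)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock rm <CONTAINERID>
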

                                                
                                    
x
+
TestPreload (163.91s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-569778 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1001 18:41:06.816020   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-569778 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m38.145013677s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-569778 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-569778 image pull gcr.io/k8s-minikube/busybox: (3.144259357s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-569778
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-569778: (6.788258598s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-569778 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-569778 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (52.823879321s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-569778 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
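
The flow this test exercises can be reproduced by hand with the same minikube invocations recorded above: start the profile without a preload tarball on a pinned Kubernetes version, pull an image that no preload would provide, stop, restart, and check that the image survived in the CRI-O image store. A simplified sketch, using the same profile name and core flags as the run above (ancillary flags such as --alsologtostderr, --wait=true and --auto-update-drivers=false omitted):

	# start without the preload tarball, on the pinned Kubernetes version
	out/minikube-linux-amd64 start -p test-preload-569778 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	# pull an image that is not part of the default image set
	out/minikube-linux-amd64 -p test-preload-569778 image pull gcr.io/k8s-minikube/busybox
	# stop, then restart without forcing the Kubernetes version
	out/minikube-linux-amd64 stop -p test-preload-569778
	out/minikube-linux-amd64 start -p test-preload-569778 --memory=3072 --driver=kvm2 --container-runtime=crio
	# the failure above is this list coming back without gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 -p test-preload-569778 image list
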
panic.go:636: *** TestPreload FAILED at 2025-10-01 18:42:16.558902661 +0000 UTC m=+3304.743927217
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-569778 -n test-preload-569778
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-569778 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-569778 logs -n 25: (1.139540697s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-388877 ssh -n multinode-388877-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:28 UTC │ 01 Oct 25 18:28 UTC │
	│ ssh     │ multinode-388877 ssh -n multinode-388877 sudo cat /home/docker/cp-test_multinode-388877-m03_multinode-388877.txt                                                                    │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:28 UTC │ 01 Oct 25 18:28 UTC │
	│ cp      │ multinode-388877 cp multinode-388877-m03:/home/docker/cp-test.txt multinode-388877-m02:/home/docker/cp-test_multinode-388877-m03_multinode-388877-m02.txt                           │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:28 UTC │ 01 Oct 25 18:28 UTC │
	│ ssh     │ multinode-388877 ssh -n multinode-388877-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:28 UTC │ 01 Oct 25 18:28 UTC │
	│ ssh     │ multinode-388877 ssh -n multinode-388877-m02 sudo cat /home/docker/cp-test_multinode-388877-m03_multinode-388877-m02.txt                                                            │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:28 UTC │ 01 Oct 25 18:28 UTC │
	│ node    │ multinode-388877 node stop m03                                                                                                                                                      │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:28 UTC │ 01 Oct 25 18:28 UTC │
	│ node    │ multinode-388877 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:28 UTC │ 01 Oct 25 18:29 UTC │
	│ node    │ list -p multinode-388877                                                                                                                                                            │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:29 UTC │                     │
	│ stop    │ -p multinode-388877                                                                                                                                                                 │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:29 UTC │ 01 Oct 25 18:31 UTC │
	│ start   │ -p multinode-388877 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:31 UTC │ 01 Oct 25 18:34 UTC │
	│ node    │ list -p multinode-388877                                                                                                                                                            │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:34 UTC │                     │
	│ node    │ multinode-388877 node delete m03                                                                                                                                                    │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:34 UTC │ 01 Oct 25 18:34 UTC │
	│ stop    │ multinode-388877 stop                                                                                                                                                               │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:34 UTC │ 01 Oct 25 18:37 UTC │
	│ start   │ -p multinode-388877 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:37 UTC │ 01 Oct 25 18:38 UTC │
	│ node    │ list -p multinode-388877                                                                                                                                                            │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:38 UTC │                     │
	│ start   │ -p multinode-388877-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-388877-m02 │ jenkins │ v1.37.0 │ 01 Oct 25 18:38 UTC │                     │
	│ start   │ -p multinode-388877-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-388877-m03 │ jenkins │ v1.37.0 │ 01 Oct 25 18:38 UTC │ 01 Oct 25 18:39 UTC │
	│ node    │ add -p multinode-388877                                                                                                                                                             │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:39 UTC │                     │
	│ delete  │ -p multinode-388877-m03                                                                                                                                                             │ multinode-388877-m03 │ jenkins │ v1.37.0 │ 01 Oct 25 18:39 UTC │ 01 Oct 25 18:39 UTC │
	│ delete  │ -p multinode-388877                                                                                                                                                                 │ multinode-388877     │ jenkins │ v1.37.0 │ 01 Oct 25 18:39 UTC │ 01 Oct 25 18:39 UTC │
	│ start   │ -p test-preload-569778 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-569778  │ jenkins │ v1.37.0 │ 01 Oct 25 18:39 UTC │ 01 Oct 25 18:41 UTC │
	│ image   │ test-preload-569778 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-569778  │ jenkins │ v1.37.0 │ 01 Oct 25 18:41 UTC │ 01 Oct 25 18:41 UTC │
	│ stop    │ -p test-preload-569778                                                                                                                                                              │ test-preload-569778  │ jenkins │ v1.37.0 │ 01 Oct 25 18:41 UTC │ 01 Oct 25 18:41 UTC │
	│ start   │ -p test-preload-569778 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-569778  │ jenkins │ v1.37.0 │ 01 Oct 25 18:41 UTC │ 01 Oct 25 18:42 UTC │
	│ image   │ test-preload-569778 image list                                                                                                                                                      │ test-preload-569778  │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │ 01 Oct 25 18:42 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 18:41:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:41:23.559144   44544 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:41:23.559363   44544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:41:23.559371   44544 out.go:374] Setting ErrFile to fd 2...
	I1001 18:41:23.559375   44544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:41:23.559579   44544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 18:41:23.560001   44544 out.go:368] Setting JSON to false
	I1001 18:41:23.560820   44544 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5028,"bootTime":1759339056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 18:41:23.560904   44544 start.go:140] virtualization: kvm guest
	I1001 18:41:23.562935   44544 out.go:179] * [test-preload-569778] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1001 18:41:23.564380   44544 notify.go:220] Checking for updates...
	I1001 18:41:23.564406   44544 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 18:41:23.565768   44544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:41:23.566826   44544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:41:23.567888   44544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 18:41:23.569017   44544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 18:41:23.570249   44544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:41:23.571676   44544 config.go:182] Loaded profile config "test-preload-569778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1001 18:41:23.572024   44544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:41:23.572072   44544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:41:23.585142   44544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33333
	I1001 18:41:23.585663   44544 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:41:23.586212   44544 main.go:141] libmachine: Using API Version  1
	I1001 18:41:23.586236   44544 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:41:23.586606   44544 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:41:23.586772   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:41:23.588568   44544 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1001 18:41:23.589778   44544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:41:23.590142   44544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:41:23.590208   44544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:41:23.602823   44544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37781
	I1001 18:41:23.603231   44544 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:41:23.603737   44544 main.go:141] libmachine: Using API Version  1
	I1001 18:41:23.603756   44544 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:41:23.604047   44544 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:41:23.604217   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:41:23.636062   44544 out.go:179] * Using the kvm2 driver based on existing profile
	I1001 18:41:23.637227   44544 start.go:304] selected driver: kvm2
	I1001 18:41:23.637241   44544 start.go:921] validating driver "kvm2" against &{Name:test-preload-569778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 C
lusterName:test-preload-569778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:41:23.637347   44544 start.go:932] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:41:23.638039   44544 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:41:23.638129   44544 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:41:23.651256   44544 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 18:41:23.651283   44544 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:41:23.664559   44544 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 18:41:23.664914   44544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:41:23.664947   44544 cni.go:84] Creating CNI manager for ""
	I1001 18:41:23.664986   44544 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:41:23.665032   44544 start.go:348] cluster config:
	{Name:test-preload-569778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-569778 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:41:23.665124   44544 iso.go:125] acquiring lock: {Name:mke4f33636eb3043bce5a51fcbb56cd6b63e4b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:41:23.667025   44544 out.go:179] * Starting "test-preload-569778" primary control-plane node in "test-preload-569778" cluster
	I1001 18:41:23.668497   44544 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1001 18:41:24.067453   44544 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1001 18:41:24.067482   44544 cache.go:58] Caching tarball of preloaded images
	I1001 18:41:24.067631   44544 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1001 18:41:24.069380   44544 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1001 18:41:24.070675   44544 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1001 18:41:24.169333   44544 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1001 18:41:24.169378   44544 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1001 18:41:33.644699   44544 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
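The preload fetch above can be reproduced outside the test harness when the cache needs to be rebuilt; a rough sketch using the URL and MD5 checksum recorded in the log (the destination path is wherever MINIKUBE_HOME points):

	curl -fLo preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	echo "2acdb4dde52794f2167c79dcee7507ae  preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -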
	I1001 18:41:33.644841   44544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/config.json ...
	I1001 18:41:33.665149   44544 start.go:360] acquireMachinesLock for test-preload-569778: {Name:mk9cde4a6dd309a36e894aa2ddacad5574ffdbe7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 18:41:33.665227   44544 start.go:364] duration metric: took 44.621µs to acquireMachinesLock for "test-preload-569778"
	I1001 18:41:33.665241   44544 start.go:96] Skipping create...Using existing machine configuration
	I1001 18:41:33.665247   44544 fix.go:54] fixHost starting: 
	I1001 18:41:33.665581   44544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:41:33.665621   44544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:41:33.678996   44544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1001 18:41:33.679404   44544 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:41:33.679877   44544 main.go:141] libmachine: Using API Version  1
	I1001 18:41:33.679899   44544 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:41:33.680222   44544 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:41:33.680417   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:41:33.680571   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetState
	I1001 18:41:33.682475   44544 fix.go:112] recreateIfNeeded on test-preload-569778: state=Stopped err=<nil>
	I1001 18:41:33.682512   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	W1001 18:41:33.682685   44544 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 18:41:33.684623   44544 out.go:252] * Restarting existing kvm2 VM for "test-preload-569778" ...
	I1001 18:41:33.684651   44544 main.go:141] libmachine: (test-preload-569778) Calling .Start
	I1001 18:41:33.684795   44544 main.go:141] libmachine: (test-preload-569778) starting domain...
	I1001 18:41:33.684818   44544 main.go:141] libmachine: (test-preload-569778) ensuring networks are active...
	I1001 18:41:33.685518   44544 main.go:141] libmachine: (test-preload-569778) Ensuring network default is active
	I1001 18:41:33.685896   44544 main.go:141] libmachine: (test-preload-569778) Ensuring network mk-test-preload-569778 is active
	I1001 18:41:33.686346   44544 main.go:141] libmachine: (test-preload-569778) getting domain XML...
	I1001 18:41:33.687403   44544 main.go:141] libmachine: (test-preload-569778) DBG | starting domain XML:
	I1001 18:41:33.687424   44544 main.go:141] libmachine: (test-preload-569778) DBG | <domain type='kvm'>
	I1001 18:41:33.687447   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <name>test-preload-569778</name>
	I1001 18:41:33.687463   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <uuid>82803052-7172-447b-9537-8746783776e4</uuid>
	I1001 18:41:33.687476   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <memory unit='KiB'>3145728</memory>
	I1001 18:41:33.687495   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1001 18:41:33.687508   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <vcpu placement='static'>2</vcpu>
	I1001 18:41:33.687518   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <os>
	I1001 18:41:33.687537   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1001 18:41:33.687552   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <boot dev='cdrom'/>
	I1001 18:41:33.687563   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <boot dev='hd'/>
	I1001 18:41:33.687574   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <bootmenu enable='no'/>
	I1001 18:41:33.687582   44544 main.go:141] libmachine: (test-preload-569778) DBG |   </os>
	I1001 18:41:33.687608   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <features>
	I1001 18:41:33.687619   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <acpi/>
	I1001 18:41:33.687630   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <apic/>
	I1001 18:41:33.687637   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <pae/>
	I1001 18:41:33.687643   44544 main.go:141] libmachine: (test-preload-569778) DBG |   </features>
	I1001 18:41:33.687653   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1001 18:41:33.687659   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <clock offset='utc'/>
	I1001 18:41:33.687672   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <on_poweroff>destroy</on_poweroff>
	I1001 18:41:33.687683   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <on_reboot>restart</on_reboot>
	I1001 18:41:33.687694   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <on_crash>destroy</on_crash>
	I1001 18:41:33.687702   44544 main.go:141] libmachine: (test-preload-569778) DBG |   <devices>
	I1001 18:41:33.687728   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1001 18:41:33.687752   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <disk type='file' device='cdrom'>
	I1001 18:41:33.687785   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <driver name='qemu' type='raw'/>
	I1001 18:41:33.687812   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/boot2docker.iso'/>
	I1001 18:41:33.687824   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <target dev='hdc' bus='scsi'/>
	I1001 18:41:33.687830   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <readonly/>
	I1001 18:41:33.687842   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1001 18:41:33.687853   44544 main.go:141] libmachine: (test-preload-569778) DBG |     </disk>
	I1001 18:41:33.687874   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <disk type='file' device='disk'>
	I1001 18:41:33.687886   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1001 18:41:33.687901   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/test-preload-569778.rawdisk'/>
	I1001 18:41:33.687915   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <target dev='hda' bus='virtio'/>
	I1001 18:41:33.687923   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1001 18:41:33.687932   44544 main.go:141] libmachine: (test-preload-569778) DBG |     </disk>
	I1001 18:41:33.687941   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1001 18:41:33.687967   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1001 18:41:33.687982   44544 main.go:141] libmachine: (test-preload-569778) DBG |     </controller>
	I1001 18:41:33.687996   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1001 18:41:33.688007   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1001 18:41:33.688020   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1001 18:41:33.688027   44544 main.go:141] libmachine: (test-preload-569778) DBG |     </controller>
	I1001 18:41:33.688035   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <interface type='network'>
	I1001 18:41:33.688047   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <mac address='52:54:00:0f:90:90'/>
	I1001 18:41:33.688067   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <source network='mk-test-preload-569778'/>
	I1001 18:41:33.688080   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <model type='virtio'/>
	I1001 18:41:33.688092   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1001 18:41:33.688104   44544 main.go:141] libmachine: (test-preload-569778) DBG |     </interface>
	I1001 18:41:33.688112   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <interface type='network'>
	I1001 18:41:33.688119   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <mac address='52:54:00:c9:55:1a'/>
	I1001 18:41:33.688130   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <source network='default'/>
	I1001 18:41:33.688148   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <model type='virtio'/>
	I1001 18:41:33.688169   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1001 18:41:33.688197   44544 main.go:141] libmachine: (test-preload-569778) DBG |     </interface>
	I1001 18:41:33.688208   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <serial type='pty'>
	I1001 18:41:33.688219   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <target type='isa-serial' port='0'>
	I1001 18:41:33.688231   44544 main.go:141] libmachine: (test-preload-569778) DBG |         <model name='isa-serial'/>
	I1001 18:41:33.688242   44544 main.go:141] libmachine: (test-preload-569778) DBG |       </target>
	I1001 18:41:33.688252   44544 main.go:141] libmachine: (test-preload-569778) DBG |     </serial>
	I1001 18:41:33.688263   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <console type='pty'>
	I1001 18:41:33.688280   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <target type='serial' port='0'/>
	I1001 18:41:33.688290   44544 main.go:141] libmachine: (test-preload-569778) DBG |     </console>
	I1001 18:41:33.688302   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <input type='mouse' bus='ps2'/>
	I1001 18:41:33.688313   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <input type='keyboard' bus='ps2'/>
	I1001 18:41:33.688326   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <audio id='1' type='none'/>
	I1001 18:41:33.688337   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <memballoon model='virtio'>
	I1001 18:41:33.688358   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1001 18:41:33.688371   44544 main.go:141] libmachine: (test-preload-569778) DBG |     </memballoon>
	I1001 18:41:33.688382   44544 main.go:141] libmachine: (test-preload-569778) DBG |     <rng model='virtio'>
	I1001 18:41:33.688394   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <backend model='random'>/dev/random</backend>
	I1001 18:41:33.688406   44544 main.go:141] libmachine: (test-preload-569778) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1001 18:41:33.688416   44544 main.go:141] libmachine: (test-preload-569778) DBG |     </rng>
	I1001 18:41:33.688438   44544 main.go:141] libmachine: (test-preload-569778) DBG |   </devices>
	I1001 18:41:33.688454   44544 main.go:141] libmachine: (test-preload-569778) DBG | </domain>
	I1001 18:41:33.688467   44544 main.go:141] libmachine: (test-preload-569778) DBG | 
	I1001 18:41:34.932715   44544 main.go:141] libmachine: (test-preload-569778) waiting for domain to start...
	I1001 18:41:34.934089   44544 main.go:141] libmachine: (test-preload-569778) domain is now running
	I1001 18:41:34.934109   44544 main.go:141] libmachine: (test-preload-569778) waiting for IP...
	I1001 18:41:34.934906   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:34.935364   44544 main.go:141] libmachine: (test-preload-569778) found domain IP: 192.168.39.127
	I1001 18:41:34.935394   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has current primary IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:34.935424   44544 main.go:141] libmachine: (test-preload-569778) reserving static IP address...
	I1001 18:41:34.935928   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "test-preload-569778", mac: "52:54:00:0f:90:90", ip: "192.168.39.127"} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:39:50 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:34.935949   44544 main.go:141] libmachine: (test-preload-569778) reserved static IP address 192.168.39.127 for domain test-preload-569778
	I1001 18:41:34.935961   44544 main.go:141] libmachine: (test-preload-569778) DBG | skip adding static IP to network mk-test-preload-569778 - found existing host DHCP lease matching {name: "test-preload-569778", mac: "52:54:00:0f:90:90", ip: "192.168.39.127"}
	I1001 18:41:34.935972   44544 main.go:141] libmachine: (test-preload-569778) DBG | Getting to WaitForSSH function...
	I1001 18:41:34.935989   44544 main.go:141] libmachine: (test-preload-569778) waiting for SSH...
	I1001 18:41:34.938569   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:34.938972   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:39:50 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:34.939013   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:34.939151   44544 main.go:141] libmachine: (test-preload-569778) DBG | Using SSH client type: external
	I1001 18:41:34.939174   44544 main.go:141] libmachine: (test-preload-569778) DBG | Using SSH private key: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa (-rw-------)
	I1001 18:41:34.939221   44544 main.go:141] libmachine: (test-preload-569778) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 18:41:34.939245   44544 main.go:141] libmachine: (test-preload-569778) DBG | About to run SSH command:
	I1001 18:41:34.939258   44544 main.go:141] libmachine: (test-preload-569778) DBG | exit 0
	I1001 18:41:45.213208   44544 main.go:141] libmachine: (test-preload-569778) DBG | SSH cmd err, output: exit status 255: 
	I1001 18:41:45.213232   44544 main.go:141] libmachine: (test-preload-569778) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1001 18:41:45.213239   44544 main.go:141] libmachine: (test-preload-569778) DBG | command : exit 0
	I1001 18:41:45.213244   44544 main.go:141] libmachine: (test-preload-569778) DBG | err     : exit status 255
	I1001 18:41:45.213252   44544 main.go:141] libmachine: (test-preload-569778) DBG | output  : 
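The exit status 255 above is the ssh client's own failure code (the guest had not finished booting on the first probe), and the check is retried below. The same probe can be run by hand when debugging a stuck WaitForSSH, using the key and options shown in the log (sketch):

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa docker@192.168.39.127 'exit 0'; echo "exit=$?"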
	I1001 18:41:48.215375   44544 main.go:141] libmachine: (test-preload-569778) DBG | Getting to WaitForSSH function...
	I1001 18:41:48.218480   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.218891   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:48.218924   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.219164   44544 main.go:141] libmachine: (test-preload-569778) DBG | Using SSH client type: external
	I1001 18:41:48.219192   44544 main.go:141] libmachine: (test-preload-569778) DBG | Using SSH private key: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa (-rw-------)
	I1001 18:41:48.219219   44544 main.go:141] libmachine: (test-preload-569778) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 18:41:48.219233   44544 main.go:141] libmachine: (test-preload-569778) DBG | About to run SSH command:
	I1001 18:41:48.219255   44544 main.go:141] libmachine: (test-preload-569778) DBG | exit 0
	I1001 18:41:48.355887   44544 main.go:141] libmachine: (test-preload-569778) DBG | SSH cmd err, output: <nil>: 
	I1001 18:41:48.356066   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetConfigRaw
	I1001 18:41:48.356811   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetIP
	I1001 18:41:48.359374   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.359894   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:48.359925   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.360161   44544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/config.json ...
	I1001 18:41:48.360393   44544 machine.go:93] provisionDockerMachine start ...
	I1001 18:41:48.360425   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:41:48.360625   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:41:48.363549   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.364058   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:48.364085   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.364305   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:41:48.364489   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:48.364659   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:48.364797   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:41:48.364989   44544 main.go:141] libmachine: Using SSH client type: native
	I1001 18:41:48.365198   44544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1001 18:41:48.365209   44544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 18:41:48.478886   44544 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 18:41:48.478918   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetMachineName
	I1001 18:41:48.479134   44544 buildroot.go:166] provisioning hostname "test-preload-569778"
	I1001 18:41:48.479162   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetMachineName
	I1001 18:41:48.479371   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:41:48.482251   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.482604   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:48.482649   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.482802   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:41:48.482970   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:48.483124   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:48.483311   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:41:48.483506   44544 main.go:141] libmachine: Using SSH client type: native
	I1001 18:41:48.483805   44544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1001 18:41:48.483825   44544 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-569778 && echo "test-preload-569778" | sudo tee /etc/hostname
	I1001 18:41:48.617340   44544 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-569778
	
	I1001 18:41:48.617369   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:41:48.620602   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.621049   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:48.621088   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.621257   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:41:48.621467   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:48.621643   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:48.621799   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:41:48.621987   44544 main.go:141] libmachine: Using SSH client type: native
	I1001 18:41:48.622242   44544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1001 18:41:48.622267   44544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-569778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-569778/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-569778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 18:41:48.753299   44544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:41:48.753334   44544 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21631-9542/.minikube CaCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21631-9542/.minikube}
	I1001 18:41:48.753355   44544 buildroot.go:174] setting up certificates
	I1001 18:41:48.753365   44544 provision.go:84] configureAuth start
	I1001 18:41:48.753375   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetMachineName
	I1001 18:41:48.753731   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetIP
	I1001 18:41:48.756883   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.757282   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:48.757313   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.757522   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:41:48.760216   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.760645   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:48.760679   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.760841   44544 provision.go:143] copyHostCerts
	I1001 18:41:48.760896   44544 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem, removing ...
	I1001 18:41:48.760914   44544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem
	I1001 18:41:48.760986   44544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem (1082 bytes)
	I1001 18:41:48.761084   44544 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem, removing ...
	I1001 18:41:48.761093   44544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem
	I1001 18:41:48.761119   44544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem (1123 bytes)
	I1001 18:41:48.761173   44544 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem, removing ...
	I1001 18:41:48.761181   44544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem
	I1001 18:41:48.761204   44544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem (1675 bytes)
	I1001 18:41:48.761253   44544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem org=jenkins.test-preload-569778 san=[127.0.0.1 192.168.39.127 localhost minikube test-preload-569778]
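If the SAN set on the regenerated server certificate needs to be confirmed, it can be inspected on the host with openssl against the path logged above (sketch):

	openssl x509 -in /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'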
	I1001 18:41:48.835664   44544 provision.go:177] copyRemoteCerts
	I1001 18:41:48.835729   44544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 18:41:48.835752   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:41:48.838711   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.839153   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:48.839179   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:48.839393   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:41:48.839579   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:48.839781   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:41:48.839943   44544 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa Username:docker}
	I1001 18:41:48.929407   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 18:41:48.957968   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 18:41:48.986998   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 18:41:49.014837   44544 provision.go:87] duration metric: took 261.460341ms to configureAuth
	I1001 18:41:49.014862   44544 buildroot.go:189] setting minikube options for container-runtime
	I1001 18:41:49.015038   44544 config.go:182] Loaded profile config "test-preload-569778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1001 18:41:49.015122   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:41:49.018090   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.018563   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:49.018592   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.018815   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:41:49.018999   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:49.019135   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:49.019260   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:41:49.019415   44544 main.go:141] libmachine: Using SSH client type: native
	I1001 18:41:49.019635   44544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1001 18:41:49.019651   44544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 18:41:49.264981   44544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 18:41:49.265018   44544 machine.go:96] duration metric: took 904.609451ms to provisionDockerMachine
	I1001 18:41:49.265033   44544 start.go:293] postStartSetup for "test-preload-569778" (driver="kvm2")
	I1001 18:41:49.265046   44544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 18:41:49.265072   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:41:49.265381   44544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 18:41:49.265407   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:41:49.268327   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.268809   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:49.268850   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.269008   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:41:49.269238   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:49.269358   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:41:49.269519   44544 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa Username:docker}
	I1001 18:41:49.359009   44544 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 18:41:49.363671   44544 info.go:137] Remote host: Buildroot 2025.02
	I1001 18:41:49.363695   44544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/addons for local assets ...
	I1001 18:41:49.363766   44544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/files for local assets ...
	I1001 18:41:49.363850   44544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem -> 134692.pem in /etc/ssl/certs
	I1001 18:41:49.363936   44544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 18:41:49.375285   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem --> /etc/ssl/certs/134692.pem (1708 bytes)
	I1001 18:41:49.402647   44544 start.go:296] duration metric: took 137.587199ms for postStartSetup
	I1001 18:41:49.402680   44544 fix.go:56] duration metric: took 15.737434002s for fixHost
	I1001 18:41:49.402700   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:41:49.405683   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.406018   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:49.406044   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.406239   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:41:49.406486   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:49.406639   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:49.406779   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:41:49.406920   44544 main.go:141] libmachine: Using SSH client type: native
	I1001 18:41:49.407110   44544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I1001 18:41:49.407119   44544 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 18:41:49.522708   44544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759344109.488073079
	
	I1001 18:41:49.522728   44544 fix.go:216] guest clock: 1759344109.488073079
	I1001 18:41:49.522738   44544 fix.go:229] Guest: 2025-10-01 18:41:49.488073079 +0000 UTC Remote: 2025-10-01 18:41:49.402683793 +0000 UTC m=+25.878889470 (delta=85.389286ms)
	I1001 18:41:49.522791   44544 fix.go:200] guest clock delta is within tolerance: 85.389286ms
	I1001 18:41:49.522799   44544 start.go:83] releasing machines lock for "test-preload-569778", held for 15.857564262s
	I1001 18:41:49.522825   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:41:49.523078   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetIP
	I1001 18:41:49.526285   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.526666   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:49.526696   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.526934   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:41:49.527460   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:41:49.527660   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:41:49.527731   44544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 18:41:49.527779   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:41:49.527839   44544 ssh_runner.go:195] Run: cat /version.json
	I1001 18:41:49.527862   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:41:49.531020   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.531234   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.531512   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:49.531550   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.531722   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:49.531748   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:49.531763   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:41:49.531932   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:49.532024   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:41:49.532132   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:41:49.532195   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:41:49.532252   44544 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa Username:docker}
	I1001 18:41:49.532295   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:41:49.532407   44544 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa Username:docker}
	I1001 18:41:49.645466   44544 ssh_runner.go:195] Run: systemctl --version
	I1001 18:41:49.651370   44544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 18:41:49.794250   44544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 18:41:49.800772   44544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 18:41:49.800849   44544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:41:49.819328   44544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 18:41:49.819350   44544 start.go:495] detecting cgroup driver to use...
	I1001 18:41:49.819414   44544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 18:41:49.838485   44544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 18:41:49.855838   44544 docker.go:218] disabling cri-docker service (if available) ...
	I1001 18:41:49.855895   44544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 18:41:49.874171   44544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 18:41:49.890038   44544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 18:41:50.036738   44544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 18:41:50.253741   44544 docker.go:234] disabling docker service ...
	I1001 18:41:50.253904   44544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 18:41:50.270289   44544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 18:41:50.284292   44544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 18:41:50.430161   44544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 18:41:50.566365   44544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 18:41:50.581860   44544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 18:41:50.603657   44544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 18:41:50.603720   44544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:41:50.615697   44544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 18:41:50.615754   44544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:41:50.627446   44544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:41:50.638977   44544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:41:50.651553   44544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 18:41:50.664585   44544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:41:50.676915   44544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:41:50.697457   44544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
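Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these key lines (a sketch; the rest of the drop-in keeps whatever the buildroot image ships):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]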
	I1001 18:41:50.710491   44544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 18:41:50.721123   44544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 18:41:50.721194   44544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 18:41:50.739684   44544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
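
The failed sysctl probe just above is expected on a fresh VM: the net.bridge.* sysctls only exist once the br_netfilter kernel module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A minimal Go sketch of that probe-then-fallback pattern (run as root; a simplified illustration, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Probe the bridge-netfilter sysctl; it only appears after br_netfilter is loaded.
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Println("modprobe br_netfilter failed:", err)
    			os.Exit(1)
    		}
    	}
    	// Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    		os.Exit(1)
    	}
    }
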
	I1001 18:41:50.751243   44544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:41:50.883674   44544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 18:41:50.989450   44544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 18:41:50.989531   44544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 18:41:50.994782   44544 start.go:563] Will wait 60s for crictl version
	I1001 18:41:50.994848   44544 ssh_runner.go:195] Run: which crictl
	I1001 18:41:50.998814   44544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 18:41:51.038579   44544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 18:41:51.038674   44544 ssh_runner.go:195] Run: crio --version
	I1001 18:41:51.068401   44544 ssh_runner.go:195] Run: crio --version
	I1001 18:41:51.098933   44544 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1001 18:41:51.100385   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetIP
	I1001 18:41:51.103316   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:51.103704   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:41:51.103725   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:41:51.103997   44544 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 18:41:51.108358   44544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
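
The bash one-liner above keeps /etc/hosts idempotent: it copies every existing line except a stale tab-separated host.minikube.internal entry into a temp file, appends the fresh mapping, and installs the result with sudo cp (so only the copy needs root). The same idea in Go, as a rough sketch with a hypothetical helper name:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one line
    // maps name to ip, preserving every unrelated entry.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale mapping for this hostname
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }
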
	I1001 18:41:51.123280   44544 kubeadm.go:875] updating cluster {Name:test-preload-569778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test
-preload-569778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 18:41:51.123377   44544 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1001 18:41:51.123463   44544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:41:51.162359   44544 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1001 18:41:51.162457   44544 ssh_runner.go:195] Run: which lz4
	I1001 18:41:51.166745   44544 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 18:41:51.172649   44544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 18:41:51.172754   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1001 18:41:52.594669   44544 crio.go:462] duration metric: took 1.427954679s to copy over tarball
	I1001 18:41:52.594752   44544 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 18:41:54.261300   44544 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.666520539s)
	I1001 18:41:54.261329   44544 crio.go:469] duration metric: took 1.666633556s to extract the tarball
	I1001 18:41:54.261336   44544 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 18:41:54.301488   44544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:41:54.351238   44544 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:41:54.351273   44544 cache_images.go:85] Images are preloaded, skipping loading
	I1001 18:41:54.351283   44544 kubeadm.go:926] updating node { 192.168.39.127 8443 v1.32.0 crio true true} ...
	I1001 18:41:54.351402   44544 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-569778 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-569778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 18:41:54.351496   44544 ssh_runner.go:195] Run: crio config
	I1001 18:41:54.396689   44544 cni.go:84] Creating CNI manager for ""
	I1001 18:41:54.396718   44544 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:41:54.396730   44544 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 18:41:54.396749   44544 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.127 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-569778 NodeName:test-preload-569778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 18:41:54.396868   44544 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-569778"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.127"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.127"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 18:41:54.396932   44544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1001 18:41:54.409039   44544 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 18:41:54.409128   44544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 18:41:54.420931   44544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1001 18:41:54.440576   44544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 18:41:54.459916   44544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1001 18:41:54.479988   44544 ssh_runner.go:195] Run: grep 192.168.39.127	control-plane.minikube.internal$ /etc/hosts
	I1001 18:41:54.483974   44544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.127	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 18:41:54.498269   44544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:41:54.633282   44544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:41:54.667241   44544 certs.go:68] Setting up /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778 for IP: 192.168.39.127
	I1001 18:41:54.667268   44544 certs.go:194] generating shared ca certs ...
	I1001 18:41:54.667290   44544 certs.go:226] acquiring lock for ca certs: {Name:mkce5c4f8bce1e11a833f05c4b70f07050ce8e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:41:54.667485   44544 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key
	I1001 18:41:54.667566   44544 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key
	I1001 18:41:54.667585   44544 certs.go:256] generating profile certs ...
	I1001 18:41:54.667691   44544 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/client.key
	I1001 18:41:54.667757   44544 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/apiserver.key.86b50fdc
	I1001 18:41:54.667813   44544 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/proxy-client.key
	I1001 18:41:54.667950   44544 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469.pem (1338 bytes)
	W1001 18:41:54.667990   44544 certs.go:480] ignoring /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469_empty.pem, impossibly tiny 0 bytes
	I1001 18:41:54.668001   44544 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 18:41:54.668040   44544 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem (1082 bytes)
	I1001 18:41:54.668073   44544 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem (1123 bytes)
	I1001 18:41:54.668107   44544 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem (1675 bytes)
	I1001 18:41:54.668170   44544 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem (1708 bytes)
	I1001 18:41:54.668834   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 18:41:54.701588   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 18:41:54.744527   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 18:41:54.775589   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 18:41:54.803835   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1001 18:41:54.832592   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 18:41:54.861246   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 18:41:54.889559   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 18:41:54.917808   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469.pem --> /usr/share/ca-certificates/13469.pem (1338 bytes)
	I1001 18:41:54.945679   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem --> /usr/share/ca-certificates/134692.pem (1708 bytes)
	I1001 18:41:54.973350   44544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 18:41:55.001047   44544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 18:41:55.020847   44544 ssh_runner.go:195] Run: openssl version
	I1001 18:41:55.027159   44544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 18:41:55.039414   44544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:41:55.044635   44544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:41:55.044682   44544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:41:55.051701   44544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 18:41:55.064341   44544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13469.pem && ln -fs /usr/share/ca-certificates/13469.pem /etc/ssl/certs/13469.pem"
	I1001 18:41:55.077007   44544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13469.pem
	I1001 18:41:55.082252   44544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 17:56 /usr/share/ca-certificates/13469.pem
	I1001 18:41:55.082312   44544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13469.pem
	I1001 18:41:55.089501   44544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13469.pem /etc/ssl/certs/51391683.0"
	I1001 18:41:55.102336   44544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134692.pem && ln -fs /usr/share/ca-certificates/134692.pem /etc/ssl/certs/134692.pem"
	I1001 18:41:55.114929   44544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134692.pem
	I1001 18:41:55.119979   44544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 17:56 /usr/share/ca-certificates/134692.pem
	I1001 18:41:55.120026   44544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134692.pem
	I1001 18:41:55.127004   44544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134692.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 18:41:55.139635   44544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 18:41:55.144694   44544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 18:41:55.151785   44544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 18:41:55.158712   44544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 18:41:55.165593   44544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 18:41:55.172633   44544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 18:41:55.179510   44544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
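
Each `openssl x509 -noout -in <cert> -checkend 86400` call above exits 0 only if the certificate remains valid for at least the next 24 hours, which is presumably how the restart path decides whether control-plane certificates need regenerating. An equivalent check in Go, reading the certificate directly (a sketch, not minikube's code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
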
	I1001 18:41:55.186359   44544 kubeadm.go:392] StartCluster: {Name:test-preload-569778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-pr
eload-569778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:41:55.186465   44544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 18:41:55.186513   44544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:41:55.224647   44544 cri.go:89] found id: ""
	I1001 18:41:55.224720   44544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 18:41:55.236790   44544 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 18:41:55.236810   44544 kubeadm.go:589] restartPrimaryControlPlane start ...
	I1001 18:41:55.236860   44544 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 18:41:55.247945   44544 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:41:55.248416   44544 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-569778" does not appear in /home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:41:55.248572   44544 kubeconfig.go:62] /home/jenkins/minikube-integration/21631-9542/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-569778" cluster setting kubeconfig missing "test-preload-569778" context setting]
	I1001 18:41:55.249091   44544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/kubeconfig: {Name:mkccaec248bac902ba8059942e9729c12d140d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:41:55.249805   44544 kapi.go:59] client config for test-preload-569778: &rest.Config{Host:"https://192.168.39.127:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/client.crt", KeyFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/client.key", CAFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 18:41:55.250302   44544 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1001 18:41:55.250322   44544 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1001 18:41:55.250328   44544 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 18:41:55.250334   44544 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1001 18:41:55.250339   44544 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 18:41:55.250738   44544 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 18:41:55.261512   44544 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.127
	I1001 18:41:55.261539   44544 kubeadm.go:1152] stopping kube-system containers ...
	I1001 18:41:55.261552   44544 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 18:41:55.261615   44544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:41:55.297545   44544 cri.go:89] found id: ""
	I1001 18:41:55.297647   44544 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 18:41:55.316109   44544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 18:41:55.327801   44544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 18:41:55.327824   44544 kubeadm.go:157] found existing configuration files:
	
	I1001 18:41:55.327878   44544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 18:41:55.338202   44544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 18:41:55.338265   44544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 18:41:55.349247   44544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 18:41:55.359424   44544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 18:41:55.359491   44544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 18:41:55.370617   44544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 18:41:55.380968   44544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 18:41:55.381028   44544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 18:41:55.392105   44544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 18:41:55.402905   44544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 18:41:55.402976   44544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 18:41:55.414295   44544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 18:41:55.425642   44544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:41:55.485125   44544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:41:56.657381   44544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.17221623s)
	I1001 18:41:56.657423   44544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:41:56.905626   44544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:41:56.975897   44544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:41:57.048691   44544 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:41:57.048770   44544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:41:57.549627   44544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:41:58.049144   44544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:41:58.549127   44544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:41:59.049106   44544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:41:59.549411   44544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:41:59.580001   44544 api_server.go:72] duration metric: took 2.531326456s to wait for apiserver process to appear ...
	I1001 18:41:59.580026   44544 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:41:59.580041   44544 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I1001 18:42:02.074034   44544 api_server.go:279] https://192.168.39.127:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 18:42:02.074070   44544 api_server.go:103] status: https://192.168.39.127:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 18:42:02.074086   44544 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I1001 18:42:02.182474   44544 api_server.go:279] https://192.168.39.127:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 18:42:02.182510   44544 api_server.go:103] status: https://192.168.39.127:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 18:42:02.182527   44544 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I1001 18:42:02.188866   44544 api_server.go:279] https://192.168.39.127:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 18:42:02.188895   44544 api_server.go:103] status: https://192.168.39.127:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 18:42:02.580423   44544 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I1001 18:42:02.584548   44544 api_server.go:279] https://192.168.39.127:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 18:42:02.584574   44544 api_server.go:103] status: https://192.168.39.127:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 18:42:03.080205   44544 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I1001 18:42:03.085469   44544 api_server.go:279] https://192.168.39.127:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 18:42:03.085499   44544 api_server.go:103] status: https://192.168.39.127:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 18:42:03.580335   44544 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I1001 18:42:03.588266   44544 api_server.go:279] https://192.168.39.127:8443/healthz returned 200:
	ok
	I1001 18:42:03.594815   44544 api_server.go:141] control plane version: v1.32.0
	I1001 18:42:03.594849   44544 api_server.go:131] duration metric: took 4.014817441s to wait for apiserver health ...
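
The healthz exchange above is the normal progression for a restarting apiserver: a 403 first, typically because anonymous access to /healthz is granted by RBAC bootstrap roles that have not been created yet; then 500s while the remaining post-start hooks (the [-] entries such as rbac/bootstrap-roles and apiservice-registration-controller) finish; and finally 200. A small poller against the same endpoint might look like the sketch below (the address is the one from this run, TLS verification is skipped purely for brevity, and this is not minikube's api_server.go):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.39.127:8443/healthz"
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	// Poll until the apiserver reports healthy or we give up.
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		} else {
    			fmt.Println("healthz request failed:", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver never became healthy")
    }
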
	I1001 18:42:03.594858   44544 cni.go:84] Creating CNI manager for ""
	I1001 18:42:03.594863   44544 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:42:03.596183   44544 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 18:42:03.597357   44544 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 18:42:03.613130   44544 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 18:42:03.640243   44544 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:42:03.645391   44544 system_pods.go:59] 7 kube-system pods found
	I1001 18:42:03.645442   44544 system_pods.go:61] "coredns-668d6bf9bc-j8865" [3fe34cb6-97f4-49f8-8115-4325ae7bd56a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:42:03.645454   44544 system_pods.go:61] "etcd-test-preload-569778" [899dd08a-16cd-406f-ba05-64d25dd217f7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:42:03.645469   44544 system_pods.go:61] "kube-apiserver-test-preload-569778" [8a7e6439-09fd-445a-96ee-f9214259f8c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:42:03.645478   44544 system_pods.go:61] "kube-controller-manager-test-preload-569778" [04a49050-fe10-4368-bfca-71db4aab9123] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:42:03.645489   44544 system_pods.go:61] "kube-proxy-l49gp" [4b9daf4d-d860-475f-88b3-5e64ca183aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 18:42:03.645498   44544 system_pods.go:61] "kube-scheduler-test-preload-569778" [b9675d44-b5ca-448b-8710-049308ebbb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:42:03.645509   44544 system_pods.go:61] "storage-provisioner" [d5140489-1c46-428b-be70-f79c5f239466] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 18:42:03.645519   44544 system_pods.go:74] duration metric: took 5.249916ms to wait for pod list to return data ...
	I1001 18:42:03.645535   44544 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:42:03.648750   44544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 18:42:03.648778   44544 node_conditions.go:123] node cpu capacity is 2
	I1001 18:42:03.648792   44544 node_conditions.go:105] duration metric: took 3.251046ms to run NodePressure ...
	I1001 18:42:03.648810   44544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:42:03.929713   44544 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I1001 18:42:03.933173   44544 kubeadm.go:735] kubelet initialised
	I1001 18:42:03.933195   44544 kubeadm.go:736] duration metric: took 3.456934ms waiting for restarted kubelet to initialise ...
	I1001 18:42:03.933211   44544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 18:42:03.950622   44544 ops.go:34] apiserver oom_adj: -16
	I1001 18:42:03.950645   44544 kubeadm.go:593] duration metric: took 8.713828833s to restartPrimaryControlPlane
	I1001 18:42:03.950655   44544 kubeadm.go:394] duration metric: took 8.764300892s to StartCluster
	I1001 18:42:03.950671   44544 settings.go:142] acquiring lock: {Name:mk5d6ab23dfd36d7b84e4e5d63470620e0207b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:42:03.950741   44544 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:42:03.951274   44544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/kubeconfig: {Name:mkccaec248bac902ba8059942e9729c12d140d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:42:03.951504   44544 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:42:03.951615   44544 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 18:42:03.951689   44544 config.go:182] Loaded profile config "test-preload-569778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1001 18:42:03.951696   44544 addons.go:69] Setting storage-provisioner=true in profile "test-preload-569778"
	I1001 18:42:03.951726   44544 addons.go:238] Setting addon storage-provisioner=true in "test-preload-569778"
	I1001 18:42:03.951728   44544 addons.go:69] Setting default-storageclass=true in profile "test-preload-569778"
	W1001 18:42:03.951736   44544 addons.go:247] addon storage-provisioner should already be in state true
	I1001 18:42:03.951749   44544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-569778"
	I1001 18:42:03.951764   44544 host.go:66] Checking if "test-preload-569778" exists ...
	I1001 18:42:03.952152   44544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:42:03.952185   44544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:42:03.952215   44544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:42:03.952243   44544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:42:03.953205   44544 out.go:179] * Verifying Kubernetes components...
	I1001 18:42:03.954590   44544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:42:03.965693   44544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37235
	I1001 18:42:03.966231   44544 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:42:03.966769   44544 main.go:141] libmachine: Using API Version  1
	I1001 18:42:03.966794   44544 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:42:03.967112   44544 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:42:03.967827   44544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:42:03.967864   44544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:42:03.970345   44544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I1001 18:42:03.970750   44544 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:42:03.971204   44544 main.go:141] libmachine: Using API Version  1
	I1001 18:42:03.971228   44544 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:42:03.971598   44544 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:42:03.971811   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetState
	I1001 18:42:03.974168   44544 kapi.go:59] client config for test-preload-569778: &rest.Config{Host:"https://192.168.39.127:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/client.crt", KeyFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/client.key", CAFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 18:42:03.974445   44544 addons.go:238] Setting addon default-storageclass=true in "test-preload-569778"
	W1001 18:42:03.974459   44544 addons.go:247] addon default-storageclass should already be in state true
	I1001 18:42:03.974487   44544 host.go:66] Checking if "test-preload-569778" exists ...
	I1001 18:42:03.974757   44544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:42:03.974782   44544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:42:03.983088   44544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I1001 18:42:03.983594   44544 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:42:03.984098   44544 main.go:141] libmachine: Using API Version  1
	I1001 18:42:03.984128   44544 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:42:03.984515   44544 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:42:03.984707   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetState
	I1001 18:42:03.986817   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:42:03.989086   44544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35779
	I1001 18:42:03.989660   44544 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:42:03.990206   44544 main.go:141] libmachine: Using API Version  1
	I1001 18:42:03.990232   44544 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:42:03.990565   44544 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:42:03.991132   44544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:42:03.991183   44544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:42:03.992568   44544 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 18:42:03.994124   44544 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:42:03.994157   44544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 18:42:03.994176   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:42:03.997554   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:42:03.998139   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:42:03.998171   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:42:03.998333   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:42:03.998508   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:42:03.998655   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:42:03.998791   44544 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa Username:docker}
	I1001 18:42:04.005219   44544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33809
	I1001 18:42:04.005809   44544 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:42:04.006411   44544 main.go:141] libmachine: Using API Version  1
	I1001 18:42:04.006448   44544 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:42:04.006791   44544 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:42:04.006991   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetState
	I1001 18:42:04.008811   44544 main.go:141] libmachine: (test-preload-569778) Calling .DriverName
	I1001 18:42:04.009018   44544 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 18:42:04.009032   44544 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 18:42:04.009044   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHHostname
	I1001 18:42:04.012225   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:42:04.012712   44544 main.go:141] libmachine: (test-preload-569778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:90:90", ip: ""} in network mk-test-preload-569778: {Iface:virbr1 ExpiryTime:2025-10-01 19:41:44 +0000 UTC Type:0 Mac:52:54:00:0f:90:90 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:test-preload-569778 Clientid:01:52:54:00:0f:90:90}
	I1001 18:42:04.012753   44544 main.go:141] libmachine: (test-preload-569778) DBG | domain test-preload-569778 has defined IP address 192.168.39.127 and MAC address 52:54:00:0f:90:90 in network mk-test-preload-569778
	I1001 18:42:04.012922   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHPort
	I1001 18:42:04.013089   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHKeyPath
	I1001 18:42:04.013245   44544 main.go:141] libmachine: (test-preload-569778) Calling .GetSSHUsername
	I1001 18:42:04.013382   44544 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/test-preload-569778/id_rsa Username:docker}
	I1001 18:42:04.186659   44544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:42:04.215570   44544 node_ready.go:35] waiting up to 6m0s for node "test-preload-569778" to be "Ready" ...
	I1001 18:42:04.225689   44544 node_ready.go:49] node "test-preload-569778" is "Ready"
	I1001 18:42:04.225726   44544 node_ready.go:38] duration metric: took 10.09095ms for node "test-preload-569778" to be "Ready" ...
	I1001 18:42:04.225743   44544 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:42:04.225805   44544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:42:04.254922   44544 api_server.go:72] duration metric: took 303.390568ms to wait for apiserver process to appear ...
	I1001 18:42:04.254946   44544 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:42:04.254974   44544 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I1001 18:42:04.262871   44544 api_server.go:279] https://192.168.39.127:8443/healthz returned 200:
	ok
	I1001 18:42:04.264183   44544 api_server.go:141] control plane version: v1.32.0
	I1001 18:42:04.264201   44544 api_server.go:131] duration metric: took 9.249313ms to wait for apiserver health ...
	I1001 18:42:04.264209   44544 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:42:04.270003   44544 system_pods.go:59] 7 kube-system pods found
	I1001 18:42:04.270038   44544 system_pods.go:61] "coredns-668d6bf9bc-j8865" [3fe34cb6-97f4-49f8-8115-4325ae7bd56a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:42:04.270049   44544 system_pods.go:61] "etcd-test-preload-569778" [899dd08a-16cd-406f-ba05-64d25dd217f7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:42:04.270075   44544 system_pods.go:61] "kube-apiserver-test-preload-569778" [8a7e6439-09fd-445a-96ee-f9214259f8c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:42:04.270090   44544 system_pods.go:61] "kube-controller-manager-test-preload-569778" [04a49050-fe10-4368-bfca-71db4aab9123] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:42:04.270096   44544 system_pods.go:61] "kube-proxy-l49gp" [4b9daf4d-d860-475f-88b3-5e64ca183aa4] Running
	I1001 18:42:04.270103   44544 system_pods.go:61] "kube-scheduler-test-preload-569778" [b9675d44-b5ca-448b-8710-049308ebbb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:42:04.270111   44544 system_pods.go:61] "storage-provisioner" [d5140489-1c46-428b-be70-f79c5f239466] Running
	I1001 18:42:04.270120   44544 system_pods.go:74] duration metric: took 5.905355ms to wait for pod list to return data ...
	I1001 18:42:04.270132   44544 default_sa.go:34] waiting for default service account to be created ...
	I1001 18:42:04.275235   44544 default_sa.go:45] found service account: "default"
	I1001 18:42:04.275261   44544 default_sa.go:55] duration metric: took 5.119362ms for default service account to be created ...
	I1001 18:42:04.275271   44544 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 18:42:04.279180   44544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 18:42:04.281749   44544 system_pods.go:86] 7 kube-system pods found
	I1001 18:42:04.281771   44544 system_pods.go:89] "coredns-668d6bf9bc-j8865" [3fe34cb6-97f4-49f8-8115-4325ae7bd56a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:42:04.281778   44544 system_pods.go:89] "etcd-test-preload-569778" [899dd08a-16cd-406f-ba05-64d25dd217f7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:42:04.281788   44544 system_pods.go:89] "kube-apiserver-test-preload-569778" [8a7e6439-09fd-445a-96ee-f9214259f8c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:42:04.281794   44544 system_pods.go:89] "kube-controller-manager-test-preload-569778" [04a49050-fe10-4368-bfca-71db4aab9123] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:42:04.281798   44544 system_pods.go:89] "kube-proxy-l49gp" [4b9daf4d-d860-475f-88b3-5e64ca183aa4] Running
	I1001 18:42:04.281804   44544 system_pods.go:89] "kube-scheduler-test-preload-569778" [b9675d44-b5ca-448b-8710-049308ebbb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:42:04.281810   44544 system_pods.go:89] "storage-provisioner" [d5140489-1c46-428b-be70-f79c5f239466] Running
	I1001 18:42:04.281816   44544 system_pods.go:126] duration metric: took 6.53925ms to wait for k8s-apps to be running ...
	I1001 18:42:04.281826   44544 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 18:42:04.281862   44544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:42:04.439012   44544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:42:04.479082   44544 main.go:141] libmachine: Making call to close driver server
	I1001 18:42:04.479120   44544 system_svc.go:56] duration metric: took 197.287384ms WaitForService to wait for kubelet
	I1001 18:42:04.479144   44544 kubeadm.go:578] duration metric: took 527.614358ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:42:04.479178   44544 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:42:04.479131   44544 main.go:141] libmachine: (test-preload-569778) Calling .Close
	I1001 18:42:04.479515   44544 main.go:141] libmachine: (test-preload-569778) DBG | Closing plugin on server side
	I1001 18:42:04.479545   44544 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:42:04.479560   44544 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:42:04.479574   44544 main.go:141] libmachine: Making call to close driver server
	I1001 18:42:04.479584   44544 main.go:141] libmachine: (test-preload-569778) Calling .Close
	I1001 18:42:04.479837   44544 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:42:04.479857   44544 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:42:04.485039   44544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 18:42:04.485057   44544 node_conditions.go:123] node cpu capacity is 2
	I1001 18:42:04.485066   44544 node_conditions.go:105] duration metric: took 5.883651ms to run NodePressure ...
	I1001 18:42:04.485076   44544 start.go:241] waiting for startup goroutines ...
	I1001 18:42:04.489089   44544 main.go:141] libmachine: Making call to close driver server
	I1001 18:42:04.489105   44544 main.go:141] libmachine: (test-preload-569778) Calling .Close
	I1001 18:42:04.489361   44544 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:42:04.489373   44544 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:42:05.043914   44544 main.go:141] libmachine: Making call to close driver server
	I1001 18:42:05.043939   44544 main.go:141] libmachine: (test-preload-569778) Calling .Close
	I1001 18:42:05.044256   44544 main.go:141] libmachine: (test-preload-569778) DBG | Closing plugin on server side
	I1001 18:42:05.044310   44544 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:42:05.044328   44544 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:42:05.044342   44544 main.go:141] libmachine: Making call to close driver server
	I1001 18:42:05.044354   44544 main.go:141] libmachine: (test-preload-569778) Calling .Close
	I1001 18:42:05.044600   44544 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:42:05.044614   44544 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:42:05.044630   44544 main.go:141] libmachine: (test-preload-569778) DBG | Closing plugin on server side
	I1001 18:42:05.047329   44544 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1001 18:42:05.048366   44544 addons.go:514] duration metric: took 1.096751739s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1001 18:42:05.048399   44544 start.go:246] waiting for cluster config update ...
	I1001 18:42:05.048412   44544 start.go:255] writing updated cluster config ...
	I1001 18:42:05.048770   44544 ssh_runner.go:195] Run: rm -f paused
	I1001 18:42:05.054938   44544 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 18:42:05.055387   44544 kapi.go:59] client config for test-preload-569778: &rest.Config{Host:"https://192.168.39.127:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/client.crt", KeyFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/test-preload-569778/client.key", CAFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 18:42:05.058147   44544 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-j8865" in "kube-system" namespace to be "Ready" or be gone ...
	W1001 18:42:07.067573   44544 pod_ready.go:104] pod "coredns-668d6bf9bc-j8865" is not "Ready", error: <nil>
	I1001 18:42:09.564578   44544 pod_ready.go:94] pod "coredns-668d6bf9bc-j8865" is "Ready"
	I1001 18:42:09.564604   44544 pod_ready.go:86] duration metric: took 4.506439009s for pod "coredns-668d6bf9bc-j8865" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:42:09.567035   44544 pod_ready.go:83] waiting for pod "etcd-test-preload-569778" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:42:11.074334   44544 pod_ready.go:94] pod "etcd-test-preload-569778" is "Ready"
	I1001 18:42:11.074369   44544 pod_ready.go:86] duration metric: took 1.507313627s for pod "etcd-test-preload-569778" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:42:11.077976   44544 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-569778" in "kube-system" namespace to be "Ready" or be gone ...
	W1001 18:42:13.084757   44544 pod_ready.go:104] pod "kube-apiserver-test-preload-569778" is not "Ready", error: <nil>
	W1001 18:42:15.084805   44544 pod_ready.go:104] pod "kube-apiserver-test-preload-569778" is not "Ready", error: <nil>
	I1001 18:42:16.084122   44544 pod_ready.go:94] pod "kube-apiserver-test-preload-569778" is "Ready"
	I1001 18:42:16.084160   44544 pod_ready.go:86] duration metric: took 5.006156805s for pod "kube-apiserver-test-preload-569778" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:42:16.085958   44544 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-569778" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:42:16.091106   44544 pod_ready.go:94] pod "kube-controller-manager-test-preload-569778" is "Ready"
	I1001 18:42:16.091129   44544 pod_ready.go:86] duration metric: took 5.145332ms for pod "kube-controller-manager-test-preload-569778" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:42:16.092971   44544 pod_ready.go:83] waiting for pod "kube-proxy-l49gp" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:42:16.096850   44544 pod_ready.go:94] pod "kube-proxy-l49gp" is "Ready"
	I1001 18:42:16.096868   44544 pod_ready.go:86] duration metric: took 3.878436ms for pod "kube-proxy-l49gp" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:42:16.099289   44544 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-569778" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:42:16.282839   44544 pod_ready.go:94] pod "kube-scheduler-test-preload-569778" is "Ready"
	I1001 18:42:16.282865   44544 pod_ready.go:86] duration metric: took 183.558624ms for pod "kube-scheduler-test-preload-569778" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:42:16.282877   44544 pod_ready.go:40] duration metric: took 11.227913433s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 18:42:16.326365   44544 start.go:620] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1001 18:42:16.327966   44544 out.go:203] 
	W1001 18:42:16.329246   44544 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1001 18:42:16.330458   44544 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1001 18:42:16.331645   44544 out.go:179] * Done! kubectl is now configured to use "test-preload-569778" cluster and "default" namespace by default
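	[editor's note] The start.go:620 line above reports the client/cluster minor-version skew (kubectl 1.34.1 against Kubernetes 1.32.0, skew 2) that produces the "! /usr/local/bin/kubectl is version 1.34.1" warning. Below is a minimal, self-contained sketch of how such a skew check can be computed; it is an illustration only, not minikube's actual implementation, and the minorSkew helper and hard-coded versions are assumptions for the example.

```go
// Sketch: compare a kubectl client version against the cluster version and
// warn when the minor-version skew exceeds 1, as the log above does for
// kubectl 1.34.1 vs. Kubernetes 1.32.0. Illustrative only; not minikube code.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew is a hypothetical helper: it parses "major.minor.patch" strings
// and returns the absolute difference of their minor components.
func minorSkew(client, cluster string) (int, error) {
	parse := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := parse(client)
	if err != nil {
		return 0, err
	}
	s, err := parse(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, err := minorSkew("1.34.1", "1.32.0")
	if err != nil {
		panic(err)
	}
	fmt.Printf("minor skew: %d\n", skew) // prints 2 for the versions in this log
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with this cluster version")
	}
}
```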
	
	
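	[editor's note] The pod_ready.go entries earlier in this log wait for the kube-system control-plane pods, selected by labels such as k8s-app=kube-dns and component=etcd, to report a Ready condition. The client-go sketch below shows one way to poll for that; the label selectors and the on-node kubeconfig path are taken from the log, but the code is an illustration under those assumptions, not the test harness's implementation.

```go
// Sketch: poll kube-system pods matched by a label selector until they report
// Ready, similar to the pod_ready.go waits in the log above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Kubeconfig path assumed from the log (/var/lib/minikube/kubeconfig on the node).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"}
	for _, sel := range selectors {
		for {
			pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			allReady := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
				}
			}
			if allReady {
				fmt.Printf("pods matching %q are Ready\n", sel)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}
```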
	==> CRI-O <==
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.229110479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344137229076318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=585a9aa7-1199-40d4-9632-8baacfa5ab3b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.230013516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adefa6c8-15e2-40a4-a9f9-3a678b686375 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.230083185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adefa6c8-15e2-40a4-a9f9-3a678b686375 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.230297516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfc6534a437da444dc50148df2fe85a34e3a55ee9a2a84c33c1d7987b9adbf1c,PodSandboxId:5691c8d3f7aedb8a58b9fd30fbedb89c2bfb2ad5c1310f10f23cba4f768c0ab7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759344126777057014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-j8865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe34cb6-97f4-49f8-8115-4325ae7bd56a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af845fa4cd325989ca9aa6c4ea27025f0a793b1062b1bab0928c1bbb19d392fd,PodSandboxId:e207c204144d8918cd02d28cdfd070105c228d840ee69f5b59f26abe5fa92e20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759344123442565093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l49gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4b9daf4d-d860-475f-88b3-5e64ca183aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817c3c4408998dfdef579c5589f097a15d5ab82f5a00965f2d5648bd622308cd,PodSandboxId:c82bc2b8543c78cfdabfe71c1ee92eac2c45c16d1de7ebff1bdd8a0b79b297ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759344123466491419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5
140489-1c46-428b-be70-f79c5f239466,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0048092da19b741094995db8961bf258f4057fdf16de62daf54f2e613834a6b2,PodSandboxId:46754dc9cb579807a0676df124545198c60b8c962138abae5b644b00202c5645,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759344119248322176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b370164
2417ecec24295ababf04adc,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b602f0fd5e1e64a1242aa6335fef6049b6a6c0250b9f4a1122e41f053e26ed7,PodSandboxId:841baa95250cb312ba80da62ea3fa83558ef3c13afba9030e076377c92fb6034,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759344119185528725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b79c232d8c1b000e21d9
e3fda0c9d581,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12819038e2a817bf2a2165e0a22d626d4a7cbda888f8986205d64638676e5098,PodSandboxId:712e78f5e3abda00d4e59dbe8f1029d7e98ffdbd8ad623dd26a45602a6a46b90,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759344119182242853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a0682e688f00cbc4376753834fbc13,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36eae02f1ef43f42b1609cc8b41437766686f9157ba569ed44a51927715b56a5,PodSandboxId:1ebe8e7d3d87abbff48bf68cdebae05f7d6dea76ac5133959cbd6da5041a7301,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759344119148544090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53ff0f6cc7ad794cdf5afe9cb8f4bc93,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adefa6c8-15e2-40a4-a9f9-3a678b686375 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.268716671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6674463b-93a2-4452-ac42-768b5faeaa72 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.268823967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6674463b-93a2-4452-ac42-768b5faeaa72 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.269771347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9449b477-70fc-4696-b185-a4dc429c66c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.270274545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344137270251006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9449b477-70fc-4696-b185-a4dc429c66c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.270857555Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68e2140c-4b0f-4db4-927c-8da888075da0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.271052917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68e2140c-4b0f-4db4-927c-8da888075da0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.271536217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfc6534a437da444dc50148df2fe85a34e3a55ee9a2a84c33c1d7987b9adbf1c,PodSandboxId:5691c8d3f7aedb8a58b9fd30fbedb89c2bfb2ad5c1310f10f23cba4f768c0ab7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759344126777057014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-j8865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe34cb6-97f4-49f8-8115-4325ae7bd56a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af845fa4cd325989ca9aa6c4ea27025f0a793b1062b1bab0928c1bbb19d392fd,PodSandboxId:e207c204144d8918cd02d28cdfd070105c228d840ee69f5b59f26abe5fa92e20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759344123442565093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l49gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4b9daf4d-d860-475f-88b3-5e64ca183aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817c3c4408998dfdef579c5589f097a15d5ab82f5a00965f2d5648bd622308cd,PodSandboxId:c82bc2b8543c78cfdabfe71c1ee92eac2c45c16d1de7ebff1bdd8a0b79b297ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759344123466491419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5
140489-1c46-428b-be70-f79c5f239466,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0048092da19b741094995db8961bf258f4057fdf16de62daf54f2e613834a6b2,PodSandboxId:46754dc9cb579807a0676df124545198c60b8c962138abae5b644b00202c5645,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759344119248322176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b370164
2417ecec24295ababf04adc,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b602f0fd5e1e64a1242aa6335fef6049b6a6c0250b9f4a1122e41f053e26ed7,PodSandboxId:841baa95250cb312ba80da62ea3fa83558ef3c13afba9030e076377c92fb6034,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759344119185528725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b79c232d8c1b000e21d9
e3fda0c9d581,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12819038e2a817bf2a2165e0a22d626d4a7cbda888f8986205d64638676e5098,PodSandboxId:712e78f5e3abda00d4e59dbe8f1029d7e98ffdbd8ad623dd26a45602a6a46b90,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759344119182242853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a0682e688f00cbc4376753834fbc13,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36eae02f1ef43f42b1609cc8b41437766686f9157ba569ed44a51927715b56a5,PodSandboxId:1ebe8e7d3d87abbff48bf68cdebae05f7d6dea76ac5133959cbd6da5041a7301,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759344119148544090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53ff0f6cc7ad794cdf5afe9cb8f4bc93,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68e2140c-4b0f-4db4-927c-8da888075da0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.310133630Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=678e8769-e418-434c-a898-288613c8446c name=/runtime.v1.RuntimeService/Version
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.310203089Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=678e8769-e418-434c-a898-288613c8446c name=/runtime.v1.RuntimeService/Version
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.311440612Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=410df495-414a-4166-b290-127e8e2b6575 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.311873973Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344137311852410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=410df495-414a-4166-b290-127e8e2b6575 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.313536305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0f120f3-8237-4199-a030-b263e6f21d01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.313586593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0f120f3-8237-4199-a030-b263e6f21d01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.313742515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfc6534a437da444dc50148df2fe85a34e3a55ee9a2a84c33c1d7987b9adbf1c,PodSandboxId:5691c8d3f7aedb8a58b9fd30fbedb89c2bfb2ad5c1310f10f23cba4f768c0ab7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759344126777057014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-j8865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe34cb6-97f4-49f8-8115-4325ae7bd56a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af845fa4cd325989ca9aa6c4ea27025f0a793b1062b1bab0928c1bbb19d392fd,PodSandboxId:e207c204144d8918cd02d28cdfd070105c228d840ee69f5b59f26abe5fa92e20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759344123442565093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l49gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4b9daf4d-d860-475f-88b3-5e64ca183aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817c3c4408998dfdef579c5589f097a15d5ab82f5a00965f2d5648bd622308cd,PodSandboxId:c82bc2b8543c78cfdabfe71c1ee92eac2c45c16d1de7ebff1bdd8a0b79b297ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759344123466491419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5
140489-1c46-428b-be70-f79c5f239466,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0048092da19b741094995db8961bf258f4057fdf16de62daf54f2e613834a6b2,PodSandboxId:46754dc9cb579807a0676df124545198c60b8c962138abae5b644b00202c5645,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759344119248322176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b370164
2417ecec24295ababf04adc,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b602f0fd5e1e64a1242aa6335fef6049b6a6c0250b9f4a1122e41f053e26ed7,PodSandboxId:841baa95250cb312ba80da62ea3fa83558ef3c13afba9030e076377c92fb6034,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759344119185528725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b79c232d8c1b000e21d9
e3fda0c9d581,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12819038e2a817bf2a2165e0a22d626d4a7cbda888f8986205d64638676e5098,PodSandboxId:712e78f5e3abda00d4e59dbe8f1029d7e98ffdbd8ad623dd26a45602a6a46b90,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759344119182242853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a0682e688f00cbc4376753834fbc13,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36eae02f1ef43f42b1609cc8b41437766686f9157ba569ed44a51927715b56a5,PodSandboxId:1ebe8e7d3d87abbff48bf68cdebae05f7d6dea76ac5133959cbd6da5041a7301,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759344119148544090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53ff0f6cc7ad794cdf5afe9cb8f4bc93,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0f120f3-8237-4199-a030-b263e6f21d01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.353333887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c9d5d2c-703a-4f85-855c-224ab89febc7 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.353435384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c9d5d2c-703a-4f85-855c-224ab89febc7 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.355418976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=104ec9e7-ac4d-4e88-a3df-b25d0418b76c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.355877533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344137355816873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=104ec9e7-ac4d-4e88-a3df-b25d0418b76c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.356466884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d74078f-8a78-402c-b7ac-e72338f39d19 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.356779629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d74078f-8a78-402c-b7ac-e72338f39d19 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:42:17 test-preload-569778 crio[834]: time="2025-10-01 18:42:17.357153646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfc6534a437da444dc50148df2fe85a34e3a55ee9a2a84c33c1d7987b9adbf1c,PodSandboxId:5691c8d3f7aedb8a58b9fd30fbedb89c2bfb2ad5c1310f10f23cba4f768c0ab7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759344126777057014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-j8865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe34cb6-97f4-49f8-8115-4325ae7bd56a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af845fa4cd325989ca9aa6c4ea27025f0a793b1062b1bab0928c1bbb19d392fd,PodSandboxId:e207c204144d8918cd02d28cdfd070105c228d840ee69f5b59f26abe5fa92e20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759344123442565093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l49gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b9daf4d-d860-475f-88b3-5e64ca183aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817c3c4408998dfdef579c5589f097a15d5ab82f5a00965f2d5648bd622308cd,PodSandboxId:c82bc2b8543c78cfdabfe71c1ee92eac2c45c16d1de7ebff1bdd8a0b79b297ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759344123466491419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5140489-1c46-428b-be70-f79c5f239466,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0048092da19b741094995db8961bf258f4057fdf16de62daf54f2e613834a6b2,PodSandboxId:46754dc9cb579807a0676df124545198c60b8c962138abae5b644b00202c5645,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759344119248322176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b3701642417ecec24295ababf04adc,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b602f0fd5e1e64a1242aa6335fef6049b6a6c0250b9f4a1122e41f053e26ed7,PodSandboxId:841baa95250cb312ba80da62ea3fa83558ef3c13afba9030e076377c92fb6034,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759344119185528725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b79c232d8c1b000e21d9e3fda0c9d581,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12819038e2a817bf2a2165e0a22d626d4a7cbda888f8986205d64638676e5098,PodSandboxId:712e78f5e3abda00d4e59dbe8f1029d7e98ffdbd8ad623dd26a45602a6a46b90,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759344119182242853,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a0682e688f00cbc4376753834fbc13,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36eae02f1ef43f42b1609cc8b41437766686f9157ba569ed44a51927715b56a5,PodSandboxId:1ebe8e7d3d87abbff48bf68cdebae05f7d6dea76ac5133959cbd6da5041a7301,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759344119148544090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-569778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53ff0f6cc7ad794cdf5afe9cb8f4bc93,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d74078f-8a78-402c-b7ac-e72338f39d19 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dfc6534a437da       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 seconds ago      Running             coredns                   1                   5691c8d3f7aed       coredns-668d6bf9bc-j8865
	817c3c4408998       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       2                   c82bc2b8543c7       storage-provisioner
	af845fa4cd325       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   13 seconds ago      Running             kube-proxy                1                   e207c204144d8       kube-proxy-l49gp
	0048092da19b7       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   18 seconds ago      Running             kube-scheduler            1                   46754dc9cb579       kube-scheduler-test-preload-569778
	3b602f0fd5e1e       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   18 seconds ago      Running             kube-apiserver            1                   841baa95250cb       kube-apiserver-test-preload-569778
	12819038e2a81       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago      Running             etcd                      1                   712e78f5e3abd       etcd-test-preload-569778
	36eae02f1ef43       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   18 seconds ago      Running             kube-controller-manager   1                   1ebe8e7d3d87a       kube-controller-manager-test-preload-569778
	
	
	==> coredns [dfc6534a437da444dc50148df2fe85a34e3a55ee9a2a84c33c1d7987b9adbf1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56997 - 15887 "HINFO IN 8130511317537320493.5334195701978508747. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.119637931s
	
	
	==> describe nodes <==
	Name:               test-preload-569778
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-569778
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de12e0f54d226aca16c1f78311795f5ec99c1492
	                    minikube.k8s.io/name=test-preload-569778
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_01T18_40_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Oct 2025 18:40:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-569778
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Oct 2025 18:42:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Oct 2025 18:42:03 +0000   Wed, 01 Oct 2025 18:40:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Oct 2025 18:42:03 +0000   Wed, 01 Oct 2025 18:40:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Oct 2025 18:42:03 +0000   Wed, 01 Oct 2025 18:40:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Oct 2025 18:42:03 +0000   Wed, 01 Oct 2025 18:42:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    test-preload-569778
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 828030527172447b95378746783776e4
	  System UUID:                82803052-7172-447b-9537-8746783776e4
	  Boot ID:                    52e4de29-620e-4789-8417-fa2f1ab3ad41
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-j8865                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     106s
	  kube-system                 etcd-test-preload-569778                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         111s
	  kube-system                 kube-apiserver-test-preload-569778             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-test-preload-569778    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-l49gp                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-test-preload-569778             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 104s               kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Normal   NodeHasSufficientMemory  111s               kubelet          Node test-preload-569778 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  111s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    111s               kubelet          Node test-preload-569778 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s               kubelet          Node test-preload-569778 status is now: NodeHasSufficientPID
	  Normal   Starting                 111s               kubelet          Starting kubelet.
	  Normal   NodeReady                110s               kubelet          Node test-preload-569778 status is now: NodeReady
	  Normal   RegisteredNode           107s               node-controller  Node test-preload-569778 event: Registered Node test-preload-569778 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-569778 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-569778 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-569778 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-569778 has been rebooted, boot id: 52e4de29-620e-4789-8417-fa2f1ab3ad41
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-569778 event: Registered Node test-preload-569778 in Controller
	
	
	==> dmesg <==
	[Oct 1 18:41] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000048] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002214] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.996260] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087876] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.097286] kauditd_printk_skb: 102 callbacks suppressed
	[Oct 1 18:42] kauditd_printk_skb: 177 callbacks suppressed
	[  +2.550829] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [12819038e2a817bf2a2165e0a22d626d4a7cbda888f8986205d64638676e5098] <==
	{"level":"info","ts":"2025-10-01T18:41:59.646521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c switched to configuration voters=(11368748717410181932)"}
	{"level":"info","ts":"2025-10-01T18:41:59.646589Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","added-peer-id":"9dc5e8b969e9632c","added-peer-peer-urls":["https://192.168.39.127:2380"]}
	{"level":"info","ts":"2025-10-01T18:41:59.646692Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-01T18:41:59.646728Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-01T18:41:59.661152Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-01T18:41:59.661441Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9dc5e8b969e9632c","initial-advertise-peer-urls":["https://192.168.39.127:2380"],"listen-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.127:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-01T18:41:59.661484Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-01T18:41:59.661566Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2025-10-01T18:41:59.661587Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2025-10-01T18:42:00.805079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-01T18:42:00.805138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-01T18:42:00.805172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgPreVoteResp from 9dc5e8b969e9632c at term 2"}
	{"level":"info","ts":"2025-10-01T18:42:00.805185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became candidate at term 3"}
	{"level":"info","ts":"2025-10-01T18:42:00.805198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgVoteResp from 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2025-10-01T18:42:00.805207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became leader at term 3"}
	{"level":"info","ts":"2025-10-01T18:42:00.805228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc5e8b969e9632c elected leader 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2025-10-01T18:42:00.810632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-01T18:42:00.811447Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-01T18:42:00.810584Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9dc5e8b969e9632c","local-member-attributes":"{Name:test-preload-569778 ClientURLs:[https://192.168.39.127:2379]}","request-path":"/0/members/9dc5e8b969e9632c/attributes","cluster-id":"367c7cb0db09c3ab","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-01T18:42:00.812146Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-01T18:42:00.812400Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-01T18:42:00.812443Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-01T18:42:00.812449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-01T18:42:00.814114Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-01T18:42:00.814701Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.127:2379"}
	
	
	==> kernel <==
	 18:42:17 up 0 min,  0 users,  load average: 0.48, 0.12, 0.04
	Linux test-preload-569778 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3b602f0fd5e1e64a1242aa6335fef6049b6a6c0250b9f4a1122e41f053e26ed7] <==
	I1001 18:42:02.145486       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1001 18:42:02.146433       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1001 18:42:02.146463       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 18:42:02.148807       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1001 18:42:02.164383       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 18:42:02.173296       1 shared_informer.go:320] Caches are synced for configmaps
	I1001 18:42:02.173327       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1001 18:42:02.178078       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1001 18:42:02.178114       1 aggregator.go:171] initial CRD sync complete...
	I1001 18:42:02.178121       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 18:42:02.178125       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 18:42:02.178129       1 cache.go:39] Caches are synced for autoregister controller
	I1001 18:42:02.183729       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1001 18:42:02.204088       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 18:42:02.204124       1 policy_source.go:240] refreshing policies
	I1001 18:42:02.230170       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 18:42:03.014975       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 18:42:03.053662       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 18:42:03.764588       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 18:42:03.806233       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 18:42:03.835615       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 18:42:03.841836       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 18:42:05.361936       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 18:42:05.656876       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 18:42:05.759581       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [36eae02f1ef43f42b1609cc8b41437766686f9157ba569ed44a51927715b56a5] <==
	I1001 18:42:05.361391       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1001 18:42:05.359704       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1001 18:42:05.359724       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1001 18:42:05.363656       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1001 18:42:05.363730       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 18:42:05.364863       1 shared_informer.go:320] Caches are synced for namespace
	I1001 18:42:05.365139       1 shared_informer.go:320] Caches are synced for PVC protection
	I1001 18:42:05.367850       1 shared_informer.go:320] Caches are synced for PV protection
	I1001 18:42:05.368960       1 shared_informer.go:320] Caches are synced for node
	I1001 18:42:05.368990       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1001 18:42:05.369019       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1001 18:42:05.369023       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1001 18:42:05.369027       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1001 18:42:05.369086       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-569778"
	I1001 18:42:05.382976       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 18:42:05.388253       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 18:42:05.388403       1 shared_informer.go:320] Caches are synced for attach detach
	I1001 18:42:05.392617       1 shared_informer.go:320] Caches are synced for service account
	I1001 18:42:05.405440       1 shared_informer.go:320] Caches are synced for HPA
	I1001 18:42:05.407628       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1001 18:42:05.770054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="407.811334ms"
	I1001 18:42:05.770439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="182.872µs"
	I1001 18:42:07.162499       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="57.597µs"
	I1001 18:42:09.165166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.674248ms"
	I1001 18:42:09.166372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="176.806µs"
	
	
	==> kube-proxy [af845fa4cd325989ca9aa6c4ea27025f0a793b1062b1bab0928c1bbb19d392fd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 18:42:03.683276       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 18:42:03.701161       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.127"]
	E1001 18:42:03.701290       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 18:42:03.766847       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1001 18:42:03.766886       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 18:42:03.766993       1 server_linux.go:170] "Using iptables Proxier"
	I1001 18:42:03.774291       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 18:42:03.774873       1 server.go:497] "Version info" version="v1.32.0"
	I1001 18:42:03.774978       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:42:03.778303       1 config.go:199] "Starting service config controller"
	I1001 18:42:03.778383       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 18:42:03.778423       1 config.go:105] "Starting endpoint slice config controller"
	I1001 18:42:03.778486       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 18:42:03.780955       1 config.go:329] "Starting node config controller"
	I1001 18:42:03.780987       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 18:42:03.878655       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 18:42:03.878679       1 shared_informer.go:320] Caches are synced for service config
	I1001 18:42:03.881709       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0048092da19b741094995db8961bf258f4057fdf16de62daf54f2e613834a6b2] <==
	I1001 18:42:00.529259       1 serving.go:386] Generated self-signed cert in-memory
	I1001 18:42:02.185089       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1001 18:42:02.185125       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:42:02.194207       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1001 18:42:02.194340       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1001 18:42:02.194381       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 18:42:02.194416       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:42:02.194456       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 18:42:02.194471       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:42:02.194477       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1001 18:42:02.194481       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 18:42:02.294557       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1001 18:42:02.294742       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I1001 18:42:02.294757       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 18:42:02 test-preload-569778 kubelet[1157]: I1001 18:42:02.297294    1157 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-569778"
	Oct 01 18:42:02 test-preload-569778 kubelet[1157]: I1001 18:42:02.304509    1157 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-569778"
	Oct 01 18:42:02 test-preload-569778 kubelet[1157]: I1001 18:42:02.304604    1157 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-569778"
	Oct 01 18:42:02 test-preload-569778 kubelet[1157]: I1001 18:42:02.304628    1157 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 01 18:42:02 test-preload-569778 kubelet[1157]: I1001 18:42:02.305997    1157 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 01 18:42:02 test-preload-569778 kubelet[1157]: E1001 18:42:02.306392    1157 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-569778\" already exists" pod="kube-system/etcd-test-preload-569778"
	Oct 01 18:42:02 test-preload-569778 kubelet[1157]: I1001 18:42:02.308057    1157 setters.go:602] "Node became not ready" node="test-preload-569778" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T18:42:02Z","lastTransitionTime":"2025-10-01T18:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 01 18:42:02 test-preload-569778 kubelet[1157]: I1001 18:42:02.974808    1157 apiserver.go:52] "Watching apiserver"
	Oct 01 18:42:02 test-preload-569778 kubelet[1157]: E1001 18:42:02.978550    1157 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-j8865" podUID="3fe34cb6-97f4-49f8-8115-4325ae7bd56a"
	Oct 01 18:42:03 test-preload-569778 kubelet[1157]: I1001 18:42:03.002645    1157 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 01 18:42:03 test-preload-569778 kubelet[1157]: I1001 18:42:03.004101    1157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d5140489-1c46-428b-be70-f79c5f239466-tmp\") pod \"storage-provisioner\" (UID: \"d5140489-1c46-428b-be70-f79c5f239466\") " pod="kube-system/storage-provisioner"
	Oct 01 18:42:03 test-preload-569778 kubelet[1157]: I1001 18:42:03.004150    1157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b9daf4d-d860-475f-88b3-5e64ca183aa4-lib-modules\") pod \"kube-proxy-l49gp\" (UID: \"4b9daf4d-d860-475f-88b3-5e64ca183aa4\") " pod="kube-system/kube-proxy-l49gp"
	Oct 01 18:42:03 test-preload-569778 kubelet[1157]: I1001 18:42:03.004166    1157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b9daf4d-d860-475f-88b3-5e64ca183aa4-xtables-lock\") pod \"kube-proxy-l49gp\" (UID: \"4b9daf4d-d860-475f-88b3-5e64ca183aa4\") " pod="kube-system/kube-proxy-l49gp"
	Oct 01 18:42:03 test-preload-569778 kubelet[1157]: E1001 18:42:03.004341    1157 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 18:42:03 test-preload-569778 kubelet[1157]: E1001 18:42:03.004418    1157 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3fe34cb6-97f4-49f8-8115-4325ae7bd56a-config-volume podName:3fe34cb6-97f4-49f8-8115-4325ae7bd56a nodeName:}" failed. No retries permitted until 2025-10-01 18:42:03.50439074 +0000 UTC m=+6.624271573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3fe34cb6-97f4-49f8-8115-4325ae7bd56a-config-volume") pod "coredns-668d6bf9bc-j8865" (UID: "3fe34cb6-97f4-49f8-8115-4325ae7bd56a") : object "kube-system"/"coredns" not registered
	Oct 01 18:42:03 test-preload-569778 kubelet[1157]: E1001 18:42:03.514575    1157 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 18:42:03 test-preload-569778 kubelet[1157]: E1001 18:42:03.514646    1157 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3fe34cb6-97f4-49f8-8115-4325ae7bd56a-config-volume podName:3fe34cb6-97f4-49f8-8115-4325ae7bd56a nodeName:}" failed. No retries permitted until 2025-10-01 18:42:04.51463303 +0000 UTC m=+7.634513852 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3fe34cb6-97f4-49f8-8115-4325ae7bd56a-config-volume") pod "coredns-668d6bf9bc-j8865" (UID: "3fe34cb6-97f4-49f8-8115-4325ae7bd56a") : object "kube-system"/"coredns" not registered
	Oct 01 18:42:03 test-preload-569778 kubelet[1157]: I1001 18:42:03.979229    1157 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 01 18:42:04 test-preload-569778 kubelet[1157]: E1001 18:42:04.522805    1157 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 18:42:04 test-preload-569778 kubelet[1157]: E1001 18:42:04.522863    1157 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3fe34cb6-97f4-49f8-8115-4325ae7bd56a-config-volume podName:3fe34cb6-97f4-49f8-8115-4325ae7bd56a nodeName:}" failed. No retries permitted until 2025-10-01 18:42:06.522851613 +0000 UTC m=+9.642732434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3fe34cb6-97f4-49f8-8115-4325ae7bd56a-config-volume") pod "coredns-668d6bf9bc-j8865" (UID: "3fe34cb6-97f4-49f8-8115-4325ae7bd56a") : object "kube-system"/"coredns" not registered
	Oct 01 18:42:07 test-preload-569778 kubelet[1157]: E1001 18:42:07.051112    1157 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344127050593366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 18:42:07 test-preload-569778 kubelet[1157]: E1001 18:42:07.051515    1157 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344127050593366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 18:42:09 test-preload-569778 kubelet[1157]: I1001 18:42:09.131883    1157 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 01 18:42:17 test-preload-569778 kubelet[1157]: E1001 18:42:17.060087    1157 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344137058020669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 18:42:17 test-preload-569778 kubelet[1157]: E1001 18:42:17.060126    1157 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344137058020669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [817c3c4408998dfdef579c5589f097a15d5ab82f5a00965f2d5648bd622308cd] <==
	I1001 18:42:03.580368       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-569778 -n test-preload-569778
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-569778 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-569778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-569778
--- FAIL: TestPreload (163.91s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (76.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-145303 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-145303 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.937759425s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-145303] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-145303" primary control-plane node in "pause-145303" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-145303" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 18:45:26.367419   47159 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:45:26.367775   47159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:45:26.367790   47159 out.go:374] Setting ErrFile to fd 2...
	I1001 18:45:26.367797   47159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:45:26.368140   47159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 18:45:26.368685   47159 out.go:368] Setting JSON to false
	I1001 18:45:26.369766   47159 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5270,"bootTime":1759339056,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 18:45:26.369897   47159 start.go:140] virtualization: kvm guest
	I1001 18:45:26.371841   47159 out.go:179] * [pause-145303] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1001 18:45:26.373395   47159 notify.go:220] Checking for updates...
	I1001 18:45:26.373449   47159 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 18:45:26.375144   47159 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:45:26.376321   47159 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:45:26.377542   47159 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 18:45:26.378681   47159 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 18:45:26.380002   47159 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:45:26.381929   47159 config.go:182] Loaded profile config "pause-145303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:45:26.382358   47159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:45:26.382439   47159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:45:26.399771   47159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I1001 18:45:26.400305   47159 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:45:26.400801   47159 main.go:141] libmachine: Using API Version  1
	I1001 18:45:26.400831   47159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:45:26.401251   47159 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:45:26.401482   47159 main.go:141] libmachine: (pause-145303) Calling .DriverName
	I1001 18:45:26.401746   47159 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:45:26.402073   47159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:45:26.402112   47159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:45:26.416616   47159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I1001 18:45:26.417175   47159 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:45:26.417713   47159 main.go:141] libmachine: Using API Version  1
	I1001 18:45:26.417741   47159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:45:26.418101   47159 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:45:26.418301   47159 main.go:141] libmachine: (pause-145303) Calling .DriverName
	I1001 18:45:26.453283   47159 out.go:179] * Using the kvm2 driver based on existing profile
	I1001 18:45:26.454492   47159 start.go:304] selected driver: kvm2
	I1001 18:45:26.454513   47159 start.go:921] validating driver "kvm2" against &{Name:pause-145303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-145303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:45:26.454715   47159 start.go:932] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:45:26.455191   47159 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:45:26.455286   47159 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:45:26.470094   47159 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 18:45:26.470125   47159 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:45:26.484626   47159 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 18:45:26.485444   47159 cni.go:84] Creating CNI manager for ""
	I1001 18:45:26.485520   47159 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:45:26.485585   47159 start.go:348] cluster config:
	{Name:pause-145303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-145303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:45:26.485743   47159 iso.go:125] acquiring lock: {Name:mke4f33636eb3043bce5a51fcbb56cd6b63e4b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:45:26.487559   47159 out.go:179] * Starting "pause-145303" primary control-plane node in "pause-145303" cluster
	I1001 18:45:26.488558   47159 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:45:26.488612   47159 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1001 18:45:26.488623   47159 cache.go:58] Caching tarball of preloaded images
	I1001 18:45:26.488761   47159 preload.go:233] Found /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 18:45:26.488776   47159 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1001 18:45:26.488930   47159 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303/config.json ...
	I1001 18:45:26.489144   47159 start.go:360] acquireMachinesLock for pause-145303: {Name:mk9cde4a6dd309a36e894aa2ddacad5574ffdbe7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 18:45:46.704767   47159 start.go:364] duration metric: took 20.215590962s to acquireMachinesLock for "pause-145303"
	I1001 18:45:46.704830   47159 start.go:96] Skipping create...Using existing machine configuration
	I1001 18:45:46.704841   47159 fix.go:54] fixHost starting: 
	I1001 18:45:46.705319   47159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:45:46.705373   47159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:45:46.724173   47159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I1001 18:45:46.724770   47159 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:45:46.725346   47159 main.go:141] libmachine: Using API Version  1
	I1001 18:45:46.725371   47159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:45:46.725726   47159 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:45:46.725924   47159 main.go:141] libmachine: (pause-145303) Calling .DriverName
	I1001 18:45:46.726085   47159 main.go:141] libmachine: (pause-145303) Calling .GetState
	I1001 18:45:46.728029   47159 fix.go:112] recreateIfNeeded on pause-145303: state=Running err=<nil>
	W1001 18:45:46.728053   47159 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 18:45:46.729723   47159 out.go:252] * Updating the running kvm2 "pause-145303" VM ...
	I1001 18:45:46.729754   47159 machine.go:93] provisionDockerMachine start ...
	I1001 18:45:46.729770   47159 main.go:141] libmachine: (pause-145303) Calling .DriverName
	I1001 18:45:46.729962   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHHostname
	I1001 18:45:46.732975   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:46.733418   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:46.733465   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:46.733659   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHPort
	I1001 18:45:46.733838   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:46.733996   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:46.734143   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHUsername
	I1001 18:45:46.734324   47159 main.go:141] libmachine: Using SSH client type: native
	I1001 18:45:46.734656   47159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1001 18:45:46.734673   47159 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 18:45:46.847336   47159 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-145303
	
	I1001 18:45:46.847367   47159 main.go:141] libmachine: (pause-145303) Calling .GetMachineName
	I1001 18:45:46.847658   47159 buildroot.go:166] provisioning hostname "pause-145303"
	I1001 18:45:46.847686   47159 main.go:141] libmachine: (pause-145303) Calling .GetMachineName
	I1001 18:45:46.847893   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHHostname
	I1001 18:45:46.851043   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:46.851525   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:46.851581   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:46.851880   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHPort
	I1001 18:45:46.852097   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:46.852286   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:46.852471   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHUsername
	I1001 18:45:46.852653   47159 main.go:141] libmachine: Using SSH client type: native
	I1001 18:45:46.852881   47159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1001 18:45:46.852897   47159 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-145303 && echo "pause-145303" | sudo tee /etc/hostname
	I1001 18:45:46.983450   47159 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-145303
	
	I1001 18:45:46.983482   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHHostname
	I1001 18:45:46.986514   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:46.986919   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:46.986957   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:46.987197   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHPort
	I1001 18:45:46.987419   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:46.987605   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:46.987745   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHUsername
	I1001 18:45:46.987926   47159 main.go:141] libmachine: Using SSH client type: native
	I1001 18:45:46.988126   47159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1001 18:45:46.988143   47159 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-145303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-145303/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-145303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 18:45:47.099835   47159 main.go:141] libmachine: SSH cmd err, output: <nil>: 
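The shell block just above is what minikube pushes over SSH so that 127.0.1.1 maps to the machine name in /etc/hosts. A minimal Go sketch of assembling that snippet for an arbitrary hostname; the helper name hostsFixup is hypothetical and this is not minikube's own provisioner code:

package main

import "fmt"

// hostsFixup returns a shell snippet that ensures /etc/hosts maps
// 127.0.1.1 to the given hostname, mirroring the command logged above.
func hostsFixup(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
	else
		echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
	fi
fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsFixup("pause-145303"))
}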
	I1001 18:45:47.099870   47159 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21631-9542/.minikube CaCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21631-9542/.minikube}
	I1001 18:45:47.099897   47159 buildroot.go:174] setting up certificates
	I1001 18:45:47.099913   47159 provision.go:84] configureAuth start
	I1001 18:45:47.099927   47159 main.go:141] libmachine: (pause-145303) Calling .GetMachineName
	I1001 18:45:47.100289   47159 main.go:141] libmachine: (pause-145303) Calling .GetIP
	I1001 18:45:47.104022   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:47.104504   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:47.104547   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:47.104668   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHHostname
	I1001 18:45:47.107587   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:47.108021   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:47.108062   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:47.108311   47159 provision.go:143] copyHostCerts
	I1001 18:45:47.108387   47159 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem, removing ...
	I1001 18:45:47.108419   47159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem
	I1001 18:45:47.108513   47159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem (1675 bytes)
	I1001 18:45:47.108705   47159 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem, removing ...
	I1001 18:45:47.108717   47159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem
	I1001 18:45:47.108746   47159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem (1082 bytes)
	I1001 18:45:47.108818   47159 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem, removing ...
	I1001 18:45:47.108825   47159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem
	I1001 18:45:47.108850   47159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem (1123 bytes)
	I1001 18:45:47.108913   47159 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem org=jenkins.pause-145303 san=[127.0.0.1 192.168.39.100 localhost minikube pause-145303]
	I1001 18:45:47.267786   47159 provision.go:177] copyRemoteCerts
	I1001 18:45:47.267862   47159 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 18:45:47.267909   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHHostname
	I1001 18:45:47.271525   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:47.271995   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:47.272022   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:47.272274   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHPort
	I1001 18:45:47.272463   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:47.272678   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHUsername
	I1001 18:45:47.272840   47159 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/pause-145303/id_rsa Username:docker}
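The sshutil line above records the client minikube builds for the guest: IP 192.168.39.100, port 22, the profile's id_rsa key, user docker. A rough sketch of dialing that same machine and running one command with golang.org/x/crypto/ssh; this is an illustration, not minikube's sshutil implementation:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the machine described by the client struct in the log
// and runs a single command, returning its combined output.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.100:22", "docker",
		"/home/jenkins/minikube-integration/21631-9542/.minikube/machines/pause-145303/id_rsa", "hostname")
	fmt.Println(out, err)
}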
	I1001 18:45:47.361066   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 18:45:47.398906   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 18:45:47.434508   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 18:45:47.471575   47159 provision.go:87] duration metric: took 371.645488ms to configureAuth
	I1001 18:45:47.471606   47159 buildroot.go:189] setting minikube options for container-runtime
	I1001 18:45:47.471803   47159 config.go:182] Loaded profile config "pause-145303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:45:47.471881   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHHostname
	I1001 18:45:47.475610   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:47.476055   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:47.476102   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:47.476383   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHPort
	I1001 18:45:47.476629   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:47.476814   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:47.476979   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHUsername
	I1001 18:45:47.477118   47159 main.go:141] libmachine: Using SSH client type: native
	I1001 18:45:47.477363   47159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1001 18:45:47.477377   47159 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 18:45:53.035855   47159 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 18:45:53.035887   47159 machine.go:96] duration metric: took 6.306126008s to provisionDockerMachine
	I1001 18:45:53.035900   47159 start.go:293] postStartSetup for "pause-145303" (driver="kvm2")
	I1001 18:45:53.035912   47159 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 18:45:53.035934   47159 main.go:141] libmachine: (pause-145303) Calling .DriverName
	I1001 18:45:53.036283   47159 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 18:45:53.036315   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHHostname
	I1001 18:45:53.039827   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:53.040253   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:53.040279   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:53.040513   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHPort
	I1001 18:45:53.040714   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:53.040866   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHUsername
	I1001 18:45:53.041009   47159 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/pause-145303/id_rsa Username:docker}
	I1001 18:45:53.126150   47159 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 18:45:53.130940   47159 info.go:137] Remote host: Buildroot 2025.02
	I1001 18:45:53.130967   47159 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/addons for local assets ...
	I1001 18:45:53.131048   47159 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/files for local assets ...
	I1001 18:45:53.131184   47159 filesync.go:149] local asset: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem -> 134692.pem in /etc/ssl/certs
	I1001 18:45:53.131282   47159 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 18:45:53.143133   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem --> /etc/ssl/certs/134692.pem (1708 bytes)
	I1001 18:45:53.175991   47159 start.go:296] duration metric: took 140.076208ms for postStartSetup
	I1001 18:45:53.176030   47159 fix.go:56] duration metric: took 6.471189851s for fixHost
	I1001 18:45:53.176052   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHHostname
	I1001 18:45:53.179235   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:53.179663   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:53.179699   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:53.179917   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHPort
	I1001 18:45:53.180164   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:53.180329   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:53.180496   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHUsername
	I1001 18:45:53.180668   47159 main.go:141] libmachine: Using SSH client type: native
	I1001 18:45:53.180858   47159 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1001 18:45:53.180868   47159 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 18:45:53.285707   47159 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759344353.281004060
	
	I1001 18:45:53.285732   47159 fix.go:216] guest clock: 1759344353.281004060
	I1001 18:45:53.285742   47159 fix.go:229] Guest: 2025-10-01 18:45:53.28100406 +0000 UTC Remote: 2025-10-01 18:45:53.176034834 +0000 UTC m=+26.859400045 (delta=104.969226ms)
	I1001 18:45:53.285768   47159 fix.go:200] guest clock delta is within tolerance: 104.969226ms
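The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift if it is inside a tolerance. A small sketch of that comparison using the exact values from the log; the one-second tolerance here is an assumption for illustration, not minikube's actual threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance parses the guest's "seconds.nanoseconds" epoch string,
// compares it against a host reference time and reports whether the
// absolute delta stays below the given tolerance.
func withinTolerance(guestEpoch string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
	f, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, false, err
	}
	sec, frac := math.Modf(f)
	guest := time.Unix(int64(sec), int64(frac*1e9))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol, nil
}

func main() {
	// Guest 1759344353.281004060 vs host 18:45:53.176034834 UTC, as logged:
	// the delta comes out around 104.97ms, matching the log line above.
	delta, ok, _ := withinTolerance("1759344353.281004060", time.Unix(1759344353, 176034834), time.Second)
	fmt.Println(delta, ok)
}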
	I1001 18:45:53.285774   47159 start.go:83] releasing machines lock for "pause-145303", held for 6.580968022s
	I1001 18:45:53.285804   47159 main.go:141] libmachine: (pause-145303) Calling .DriverName
	I1001 18:45:53.286118   47159 main.go:141] libmachine: (pause-145303) Calling .GetIP
	I1001 18:45:53.289629   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:53.290104   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:53.290136   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:53.290326   47159 main.go:141] libmachine: (pause-145303) Calling .DriverName
	I1001 18:45:53.290859   47159 main.go:141] libmachine: (pause-145303) Calling .DriverName
	I1001 18:45:53.291045   47159 main.go:141] libmachine: (pause-145303) Calling .DriverName
	I1001 18:45:53.291149   47159 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 18:45:53.291201   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHHostname
	I1001 18:45:53.291286   47159 ssh_runner.go:195] Run: cat /version.json
	I1001 18:45:53.291311   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHHostname
	I1001 18:45:53.294397   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:53.294658   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:53.294838   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:53.294858   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:53.295012   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHPort
	I1001 18:45:53.295189   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:53.295221   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:53.295246   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:53.295357   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHUsername
	I1001 18:45:53.295447   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHPort
	I1001 18:45:53.295541   47159 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/pause-145303/id_rsa Username:docker}
	I1001 18:45:53.295611   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHKeyPath
	I1001 18:45:53.295788   47159 main.go:141] libmachine: (pause-145303) Calling .GetSSHUsername
	I1001 18:45:53.295935   47159 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/pause-145303/id_rsa Username:docker}
	I1001 18:45:53.406143   47159 ssh_runner.go:195] Run: systemctl --version
	I1001 18:45:53.412787   47159 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 18:45:53.566098   47159 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 18:45:53.575911   47159 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 18:45:53.575996   47159 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:45:53.590166   47159 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 18:45:53.590190   47159 start.go:495] detecting cgroup driver to use...
	I1001 18:45:53.590250   47159 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 18:45:53.612955   47159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 18:45:53.633923   47159 docker.go:218] disabling cri-docker service (if available) ...
	I1001 18:45:53.633992   47159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 18:45:53.655975   47159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 18:45:53.672545   47159 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 18:45:53.879595   47159 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 18:45:54.062932   47159 docker.go:234] disabling docker service ...
	I1001 18:45:54.063011   47159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 18:45:54.102735   47159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 18:45:54.127594   47159 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 18:45:54.340997   47159 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 18:45:54.537385   47159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 18:45:54.557769   47159 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 18:45:54.584917   47159 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1001 18:45:54.584983   47159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:45:54.604223   47159 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 18:45:54.604349   47159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:45:54.619608   47159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:45:54.635405   47159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:45:54.650644   47159 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 18:45:54.665676   47159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:45:54.681988   47159 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:45:54.697573   47159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:45:54.711769   47159 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 18:45:54.725168   47159 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 18:45:54.738831   47159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:45:54.946230   47159 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 18:45:56.395476   47159 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.449208387s)
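The run of sed commands just above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) before crio is restarted. A hedged Go sketch of two of those substitutions applied to a config held in memory; the sample input is illustrative, not the real drop-in file:

package main

import (
	"fmt"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// patchCrioConf mirrors the sed edits in the log: force the pause image
// and the cgroup manager to the values minikube wants for this runtime.
func patchCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(sample, "registry.k8s.io/pause:3.10.1", "cgroupfs"))
}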
	I1001 18:45:56.395512   47159 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 18:45:56.395566   47159 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 18:45:56.402065   47159 start.go:563] Will wait 60s for crictl version
	I1001 18:45:56.402141   47159 ssh_runner.go:195] Run: which crictl
	I1001 18:45:56.406910   47159 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 18:45:56.459299   47159 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 18:45:56.459390   47159 ssh_runner.go:195] Run: crio --version
	I1001 18:45:56.495682   47159 ssh_runner.go:195] Run: crio --version
	I1001 18:45:56.527499   47159 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1001 18:45:56.528656   47159 main.go:141] libmachine: (pause-145303) Calling .GetIP
	I1001 18:45:56.531581   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:56.531968   47159 main.go:141] libmachine: (pause-145303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:06:39", ip: ""} in network mk-pause-145303: {Iface:virbr1 ExpiryTime:2025-10-01 19:44:21 +0000 UTC Type:0 Mac:52:54:00:bd:06:39 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-145303 Clientid:01:52:54:00:bd:06:39}
	I1001 18:45:56.531994   47159 main.go:141] libmachine: (pause-145303) DBG | domain pause-145303 has defined IP address 192.168.39.100 and MAC address 52:54:00:bd:06:39 in network mk-pause-145303
	I1001 18:45:56.532248   47159 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 18:45:56.536754   47159 kubeadm.go:875] updating cluster {Name:pause-145303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-14530
3 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fal
se olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 18:45:56.536931   47159 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:45:56.537004   47159 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:45:56.584184   47159 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:45:56.584211   47159 crio.go:433] Images already preloaded, skipping extraction
	I1001 18:45:56.584265   47159 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:45:56.619828   47159 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:45:56.619853   47159 cache_images.go:85] Images are preloaded, skipping loading
	I1001 18:45:56.619862   47159 kubeadm.go:926] updating node { 192.168.39.100 8443 v1.34.1 crio true true} ...
	I1001 18:45:56.619977   47159 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-145303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-145303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
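The unit fragment above shows the kubelet ExecStart line that kubeadm.go writes for this node. A tiny sketch of assembling that flag list from the per-node values; the helper is illustrative, not minikube's own bootstrapper:

package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart rebuilds the ExecStart line seen in the unit above
// from the node's Kubernetes version, name and IP.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletExecStart("v1.34.1", "pause-145303", "192.168.39.100"))
}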
	I1001 18:45:56.620059   47159 ssh_runner.go:195] Run: crio config
	I1001 18:45:56.699890   47159 cni.go:84] Creating CNI manager for ""
	I1001 18:45:56.699923   47159 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:45:56.699935   47159 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 18:45:56.699958   47159 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-145303 NodeName:pause-145303 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 18:45:56.700117   47159 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-145303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.100"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 18:45:56.700201   47159 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1001 18:45:56.733148   47159 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 18:45:56.733223   47159 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 18:45:56.768743   47159 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 18:45:56.816022   47159 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 18:45:56.878747   47159 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
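The scp line above writes the assembled kubeadm/kubelet/kube-proxy configuration shown earlier to /var/tmp/minikube/kubeadm.yaml.new. A minimal text/template sketch that renders just the InitConfiguration head from the node parameters; the template literal is illustrative, not minikube's actual bootstrapper template:

package main

import (
	"os"
	"text/template"
)

// node holds the handful of parameters that vary per machine in the
// InitConfiguration stanza shown in the log above.
type node struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`))

func main() {
	_ = initCfg.Execute(os.Stdout, node{
		AdvertiseAddress: "192.168.39.100",
		APIServerPort:    8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "pause-145303",
	})
}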
	I1001 18:45:56.952919   47159 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I1001 18:45:56.964582   47159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:45:57.277546   47159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:45:57.316516   47159 certs.go:68] Setting up /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303 for IP: 192.168.39.100
	I1001 18:45:57.316545   47159 certs.go:194] generating shared ca certs ...
	I1001 18:45:57.316566   47159 certs.go:226] acquiring lock for ca certs: {Name:mkce5c4f8bce1e11a833f05c4b70f07050ce8e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:45:57.316785   47159 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key
	I1001 18:45:57.316891   47159 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key
	I1001 18:45:57.316908   47159 certs.go:256] generating profile certs ...
	I1001 18:45:57.317038   47159 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303/client.key
	I1001 18:45:57.317145   47159 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303/apiserver.key.9567faef
	I1001 18:45:57.317209   47159 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303/proxy-client.key
	I1001 18:45:57.317380   47159 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469.pem (1338 bytes)
	W1001 18:45:57.317444   47159 certs.go:480] ignoring /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469_empty.pem, impossibly tiny 0 bytes
	I1001 18:45:57.317459   47159 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 18:45:57.317490   47159 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem (1082 bytes)
	I1001 18:45:57.317537   47159 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem (1123 bytes)
	I1001 18:45:57.317573   47159 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem (1675 bytes)
	I1001 18:45:57.317642   47159 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem (1708 bytes)
	I1001 18:45:57.318674   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 18:45:57.402191   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 18:45:57.466974   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 18:45:57.562152   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 18:45:57.641710   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 18:45:57.728872   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 18:45:57.831500   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 18:45:57.912154   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 18:45:57.965339   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 18:45:58.030659   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469.pem --> /usr/share/ca-certificates/13469.pem (1338 bytes)
	I1001 18:45:58.082673   47159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem --> /usr/share/ca-certificates/134692.pem (1708 bytes)
	I1001 18:45:58.132389   47159 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 18:45:58.167628   47159 ssh_runner.go:195] Run: openssl version
	I1001 18:45:58.183894   47159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 18:45:58.209513   47159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:45:58.222508   47159 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:45:58.222583   47159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:45:58.230874   47159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 18:45:58.263552   47159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13469.pem && ln -fs /usr/share/ca-certificates/13469.pem /etc/ssl/certs/13469.pem"
	I1001 18:45:58.308757   47159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13469.pem
	I1001 18:45:58.318302   47159 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 17:56 /usr/share/ca-certificates/13469.pem
	I1001 18:45:58.318362   47159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13469.pem
	I1001 18:45:58.329064   47159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13469.pem /etc/ssl/certs/51391683.0"
	I1001 18:45:58.350892   47159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134692.pem && ln -fs /usr/share/ca-certificates/134692.pem /etc/ssl/certs/134692.pem"
	I1001 18:45:58.371373   47159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134692.pem
	I1001 18:45:58.378671   47159 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 17:56 /usr/share/ca-certificates/134692.pem
	I1001 18:45:58.378740   47159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134692.pem
	I1001 18:45:58.395456   47159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134692.pem /etc/ssl/certs/3ec20f2e.0"
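Each ls/openssl/ln sequence above installs a CA bundle under /usr/share/ca-certificates and adds the OpenSSL subject-hash symlink in /etc/ssl/certs (for example b5213941.0) that OpenSSL-based clients use for lookup. A sketch of the same idea driven from Go, shelling out to openssl for the hash; the path is one from the log and this would need to run as root on the guest, not on a developer machine:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink computes the OpenSSL subject hash of a certificate and creates
// the /etc/ssl/certs/<hash>.0 symlink pointing at it, mirroring the
// "openssl x509 -hash" plus "ln -fs" pair in the log.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // emulate ln -fs: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}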
	I1001 18:45:58.422846   47159 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 18:45:58.433415   47159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 18:45:58.460738   47159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 18:45:58.481735   47159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 18:45:58.492279   47159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 18:45:58.501541   47159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 18:45:58.510080   47159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
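Each `openssl x509 ... -checkend 86400` run above asks whether the certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs need regenerating. An equivalent check with crypto/x509, as a sketch; the file path is one of those from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given window, like "openssl x509 -checkend".
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	fmt.Println(soon, err)
}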
	I1001 18:45:58.519403   47159 kubeadm.go:392] StartCluster: {Name:pause-145303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-145303 N
amespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:45:58.519564   47159 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 18:45:58.519625   47159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:45:58.567077   47159 cri.go:89] found id: "7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884"
	I1001 18:45:58.567102   47159 cri.go:89] found id: "51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea"
	I1001 18:45:58.567108   47159 cri.go:89] found id: "18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6"
	I1001 18:45:58.567113   47159 cri.go:89] found id: "1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a"
	I1001 18:45:58.567117   47159 cri.go:89] found id: "c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882"
	I1001 18:45:58.567122   47159 cri.go:89] found id: "383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6"
	I1001 18:45:58.567126   47159 cri.go:89] found id: "a16475d3f753f44c2c1b4a9b9b04235d6bfd0b681e5fa8787774b9dc4b577aea"
	I1001 18:45:58.567130   47159 cri.go:89] found id: "4a1008cd6816f76c1385d97f09a2ae228e7cbede6f1175fc6c10c8d3ca70a51d"
	I1001 18:45:58.567142   47159 cri.go:89] found id: "2c58ab47806f0cd585528456f77358792af71aad909529896cec7d1facfd4001"
	I1001 18:45:58.567151   47159 cri.go:89] found id: "bdeec108f86f5cfb2f7d08015b084c5e5b8202940a55b5acbcda0265129fd4bf"
	I1001 18:45:58.567155   47159 cri.go:89] found id: "40f929bcdc178aec842ce93dd5d17aed517431c35b9d0d55384853e1be32b10e"
	I1001 18:45:58.567159   47159 cri.go:89] found id: "4b0c54b170ef27074b92d3c4355c71c32df8284932caa2dac4a7ce857d57a8dd"
	I1001 18:45:58.567162   47159 cri.go:89] found id: ""
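cri.go above collects the container IDs that `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` prints one per line. A small sketch of running that command and splitting the output; it assumes crictl is on PATH and the caller can sudo, and is not minikube's cri package:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers returns the IDs of all kube-system containers
// known to the CRI runtime, in any state, one ID per output line.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	fmt.Println(ids, err)
}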
	I1001 18:45:58.567207   47159 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-145303 -n pause-145303
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-145303 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-145303 logs -n 25: (1.546508824s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-899156 --schedule 15s                                                                                                                            │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │                     │
	│ stop    │ -p scheduled-stop-899156 --schedule 15s                                                                                                                            │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │                     │
	│ stop    │ -p scheduled-stop-899156 --cancel-scheduled                                                                                                                        │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │ 01 Oct 25 18:42 UTC │
	│ stop    │ -p scheduled-stop-899156 --schedule 15s                                                                                                                            │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │                     │
	│ stop    │ -p scheduled-stop-899156 --schedule 15s                                                                                                                            │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │                     │
	│ stop    │ -p scheduled-stop-899156 --schedule 15s                                                                                                                            │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ delete  │ -p scheduled-stop-899156                                                                                                                                           │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:44 UTC │ 01 Oct 25 18:44 UTC │
	│ start   │ -p NoKubernetes-180525 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                            │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:44 UTC │                     │
	│ start   │ -p pause-145303 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-145303              │ jenkins │ v1.37.0 │ 01 Oct 25 18:44 UTC │ 01 Oct 25 18:45 UTC │
	│ start   │ -p offline-crio-136397 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ offline-crio-136397       │ jenkins │ v1.37.0 │ 01 Oct 25 18:44 UTC │ 01 Oct 25 18:45 UTC │
	│ start   │ -p NoKubernetes-180525 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:44 UTC │ 01 Oct 25 18:45 UTC │
	│ start   │ -p stopped-upgrade-149070 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-149070    │ jenkins │ v1.32.0 │ 01 Oct 25 18:44 UTC │ 01 Oct 25 18:45 UTC │
	│ delete  │ -p offline-crio-136397                                                                                                                                             │ offline-crio-136397       │ jenkins │ v1.37.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:45 UTC │
	│ start   │ -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-130620 │ jenkins │ v1.37.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:46 UTC │
	│ start   │ -p pause-145303 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-145303              │ jenkins │ v1.37.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:46 UTC │
	│ start   │ -p NoKubernetes-180525 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:46 UTC │
	│ stop    │ stopped-upgrade-149070 stop                                                                                                                                        │ stopped-upgrade-149070    │ jenkins │ v1.32.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:45 UTC │
	│ start   │ -p stopped-upgrade-149070 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-149070    │ jenkins │ v1.37.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:46 UTC │
	│ delete  │ -p NoKubernetes-180525                                                                                                                                             │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │ 01 Oct 25 18:46 UTC │
	│ start   │ -p NoKubernetes-180525 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-130620                                                                                                                                       │ kubernetes-upgrade-130620 │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │ 01 Oct 25 18:46 UTC │
	│ start   │ -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-130620 │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-149070 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-149070    │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │                     │
	│ delete  │ -p stopped-upgrade-149070                                                                                                                                          │ stopped-upgrade-149070    │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │ 01 Oct 25 18:46 UTC │
	│ ssh     │ -p NoKubernetes-180525 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 18:46:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
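The header above spells out the klog line format used throughout these logs: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A throwaway parser for that format, handy when slicing post-mortem logs like this one; the field names are my own:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header format documented above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I1001 18:46:10.641545   48033 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}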
	I1001 18:46:10.641545   48033 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:46:10.641664   48033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:46:10.641673   48033 out.go:374] Setting ErrFile to fd 2...
	I1001 18:46:10.641677   48033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:46:10.641847   48033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 18:46:10.642262   48033 out.go:368] Setting JSON to false
	I1001 18:46:10.643163   48033 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5315,"bootTime":1759339056,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 18:46:10.643264   48033 start.go:140] virtualization: kvm guest
	I1001 18:46:10.645250   48033 out.go:179] * [kubernetes-upgrade-130620] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1001 18:46:10.646545   48033 notify.go:220] Checking for updates...
	I1001 18:46:10.646570   48033 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 18:46:10.648004   48033 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:46:10.649296   48033 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:46:10.650510   48033 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 18:46:10.651735   48033 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 18:46:10.653000   48033 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:46:10.654850   48033 config.go:182] Loaded profile config "kubernetes-upgrade-130620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1001 18:46:10.655415   48033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:46:10.655529   48033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:46:10.671162   48033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1001 18:46:10.671617   48033 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:46:10.672158   48033 main.go:141] libmachine: Using API Version  1
	I1001 18:46:10.672185   48033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:46:10.672554   48033 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:46:10.672805   48033 main.go:141] libmachine: (kubernetes-upgrade-130620) Calling .DriverName
	I1001 18:46:10.673051   48033 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:46:10.673350   48033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:46:10.673411   48033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:46:10.687744   48033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I1001 18:46:10.688165   48033 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:46:10.688641   48033 main.go:141] libmachine: Using API Version  1
	I1001 18:46:10.688679   48033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:46:10.689006   48033 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:46:10.689217   48033 main.go:141] libmachine: (kubernetes-upgrade-130620) Calling .DriverName
	I1001 18:46:10.725236   48033 out.go:179] * Using the kvm2 driver based on existing profile
	I1001 18:46:10.726475   48033 start.go:304] selected driver: kvm2
	I1001 18:46:10.726491   48033 start.go:921] validating driver "kvm2" against &{Name:kubernetes-upgrade-130620 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-130620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.179 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:46:10.726601   48033 start.go:932] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:46:10.727262   48033 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:46:10.727356   48033 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:46:10.741529   48033 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 18:46:10.741570   48033 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 18:46:10.755858   48033 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 18:46:10.756274   48033 cni.go:84] Creating CNI manager for ""
	I1001 18:46:10.756338   48033 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:46:10.756377   48033 start.go:348] cluster config:
	{Name:kubernetes-upgrade-130620 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-130620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.179 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:46:10.756521   48033 iso.go:125] acquiring lock: {Name:mke4f33636eb3043bce5a51fcbb56cd6b63e4b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:46:10.759061   48033 out.go:179] * Starting "kubernetes-upgrade-130620" primary control-plane node in "kubernetes-upgrade-130620" cluster
	I1001 18:46:10.760148   48033 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:46:10.760188   48033 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1001 18:46:10.760202   48033 cache.go:58] Caching tarball of preloaded images
	I1001 18:46:10.760302   48033 preload.go:233] Found /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 18:46:10.760314   48033 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1001 18:46:10.760449   48033 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kubernetes-upgrade-130620/config.json ...
	I1001 18:46:10.760692   48033 start.go:360] acquireMachinesLock for kubernetes-upgrade-130620: {Name:mk9cde4a6dd309a36e894aa2ddacad5574ffdbe7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 18:46:12.796889   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | SSH cmd err, output: exit status 255: 
	I1001 18:46:12.796909   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1001 18:46:12.796917   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | command : exit 0
	I1001 18:46:12.796922   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | err     : exit status 255
	I1001 18:46:12.796929   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | output  : 
	I1001 18:46:15.797592   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | Getting to WaitForSSH function...
	I1001 18:46:15.800407   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:15.800835   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:15.800887   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:15.801019   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | Using SSH client type: external
	I1001 18:46:15.801050   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | Using SSH private key: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/stopped-upgrade-149070/id_rsa (-rw-------)
	I1001 18:46:15.801097   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21631-9542/.minikube/machines/stopped-upgrade-149070/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 18:46:15.801114   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | About to run SSH command:
	I1001 18:46:15.801128   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | exit 0
	I1001 18:46:15.898505   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | SSH cmd err, output: <nil>: 
	I1001 18:46:15.899037   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetConfigRaw
	I1001 18:46:15.899952   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetIP
	I1001 18:46:15.903148   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:15.903667   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:15.903688   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:15.904026   47627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/config.json ...
	I1001 18:46:15.904238   47627 machine.go:93] provisionDockerMachine start ...
	I1001 18:46:15.904256   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .DriverName
	I1001 18:46:15.904467   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:15.907628   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:15.908006   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:15.908033   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:15.908137   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:15.908334   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:15.908515   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:15.908647   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:15.908863   47627 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:15.909099   47627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1001 18:46:15.909110   47627 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 18:46:16.029327   47627 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 18:46:16.029352   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetMachineName
	I1001 18:46:16.029634   47627 buildroot.go:166] provisioning hostname "stopped-upgrade-149070"
	I1001 18:46:16.029682   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetMachineName
	I1001 18:46:16.029898   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:16.032486   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.032910   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:16.032937   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.033135   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:16.033295   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:16.033446   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:16.033574   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:16.033771   47627 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:16.033999   47627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1001 18:46:16.034020   47627 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-149070 && echo "stopped-upgrade-149070" | sudo tee /etc/hostname
	I1001 18:46:16.163184   47627 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-149070
	
	I1001 18:46:16.163230   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:16.166628   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.167016   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:16.167048   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.167227   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:16.167439   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:16.167617   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:16.167806   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:16.168011   47627 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:16.168280   47627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1001 18:46:16.168306   47627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-149070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-149070/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-149070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 18:46:16.296305   47627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:46:16.296331   47627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21631-9542/.minikube CaCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21631-9542/.minikube}
	I1001 18:46:16.296364   47627 buildroot.go:174] setting up certificates
	I1001 18:46:16.296376   47627 provision.go:84] configureAuth start
	I1001 18:46:16.296389   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetMachineName
	I1001 18:46:16.296703   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetIP
	I1001 18:46:16.299616   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.299923   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:16.299955   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.300166   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:16.302546   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.302872   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:16.302893   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.303097   47627 provision.go:143] copyHostCerts
	I1001 18:46:16.303162   47627 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem, removing ...
	I1001 18:46:16.303176   47627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem
	I1001 18:46:16.303231   47627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem (1082 bytes)
	I1001 18:46:16.303329   47627 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem, removing ...
	I1001 18:46:16.303337   47627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem
	I1001 18:46:16.303358   47627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem (1123 bytes)
	I1001 18:46:16.303422   47627 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem, removing ...
	I1001 18:46:16.303453   47627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem
	I1001 18:46:16.303492   47627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem (1675 bytes)
	I1001 18:46:16.303564   47627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-149070 san=[127.0.0.1 192.168.72.10 localhost minikube stopped-upgrade-149070]
	I1001 18:46:16.735970   47627 provision.go:177] copyRemoteCerts
	I1001 18:46:16.736041   47627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 18:46:16.736068   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:16.739286   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.739705   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:16.739745   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.739931   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:16.740127   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:16.740291   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:16.740417   47627 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/stopped-upgrade-149070/id_rsa Username:docker}
	I1001 18:46:16.827810   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 18:46:16.849084   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1001 18:46:16.869535   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 18:46:16.889515   47627 provision.go:87] duration metric: took 593.124293ms to configureAuth
	I1001 18:46:16.889551   47627 buildroot.go:189] setting minikube options for container-runtime
	I1001 18:46:16.889723   47627 config.go:182] Loaded profile config "stopped-upgrade-149070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1001 18:46:16.889796   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:16.892801   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.893210   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:16.893260   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:16.893404   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:16.893654   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:16.893804   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:16.893957   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:16.894111   47627 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:16.894327   47627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1001 18:46:16.894348   47627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 18:46:17.443688   47871 start.go:364] duration metric: took 11.924368472s to acquireMachinesLock for "NoKubernetes-180525"
	I1001 18:46:17.443739   47871 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-180525 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-180525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:46:17.443852   47871 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 18:46:17.194182   47627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 18:46:17.194214   47627 machine.go:96] duration metric: took 1.289962198s to provisionDockerMachine
	I1001 18:46:17.194228   47627 start.go:293] postStartSetup for "stopped-upgrade-149070" (driver="kvm2")
	I1001 18:46:17.194238   47627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 18:46:17.194258   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .DriverName
	I1001 18:46:17.194625   47627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 18:46:17.194649   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:17.197683   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:17.198055   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:17.198097   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:17.198326   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:17.198539   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:17.198723   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:17.198892   47627 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/stopped-upgrade-149070/id_rsa Username:docker}
	I1001 18:46:17.287059   47627 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 18:46:17.290555   47627 info.go:137] Remote host: Buildroot 2021.02.12
	I1001 18:46:17.290576   47627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/addons for local assets ...
	I1001 18:46:17.290637   47627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/files for local assets ...
	I1001 18:46:17.290717   47627 filesync.go:149] local asset: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem -> 134692.pem in /etc/ssl/certs
	I1001 18:46:17.290814   47627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 18:46:17.297965   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem --> /etc/ssl/certs/134692.pem (1708 bytes)
	I1001 18:46:17.318873   47627 start.go:296] duration metric: took 124.631106ms for postStartSetup
	I1001 18:46:17.318908   47627 fix.go:56] duration metric: took 16.61271894s for fixHost
	I1001 18:46:17.318926   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:17.322067   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:17.322399   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:17.322454   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:17.322642   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:17.322822   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:17.322953   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:17.323052   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:17.323256   47627 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:17.323525   47627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1001 18:46:17.323539   47627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 18:46:17.443555   47627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759344377.399280974
	
	I1001 18:46:17.443575   47627 fix.go:216] guest clock: 1759344377.399280974
	I1001 18:46:17.443581   47627 fix.go:229] Guest: 2025-10-01 18:46:17.399280974 +0000 UTC Remote: 2025-10-01 18:46:17.318911661 +0000 UTC m=+25.215217352 (delta=80.369313ms)
	I1001 18:46:17.443599   47627 fix.go:200] guest clock delta is within tolerance: 80.369313ms
	I1001 18:46:17.443603   47627 start.go:83] releasing machines lock for "stopped-upgrade-149070", held for 16.737443977s
	I1001 18:46:17.443625   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .DriverName
	I1001 18:46:17.443918   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetIP
	I1001 18:46:17.446981   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:17.447367   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:17.447416   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:17.447526   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .DriverName
	I1001 18:46:17.448073   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .DriverName
	I1001 18:46:17.448272   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .DriverName
	I1001 18:46:17.448364   47627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 18:46:17.448416   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:17.448472   47627 ssh_runner.go:195] Run: cat /version.json
	I1001 18:46:17.448501   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:17.451875   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:17.452124   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:17.452303   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:17.452329   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:17.452551   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:17.452760   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:17.452766   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:17.452787   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:17.452930   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:17.453015   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:17.453093   47627 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/stopped-upgrade-149070/id_rsa Username:docker}
	I1001 18:46:17.453176   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:17.453310   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:17.453445   47627 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/stopped-upgrade-149070/id_rsa Username:docker}
	W1001 18:46:17.544605   47627 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.32.0 -> Actual minikube version: v1.37.0
	I1001 18:46:17.544686   47627 ssh_runner.go:195] Run: systemctl --version
	I1001 18:46:17.575922   47627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 18:46:17.720888   47627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 18:46:17.725996   47627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 18:46:17.726079   47627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:46:17.740937   47627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 18:46:17.740961   47627 start.go:495] detecting cgroup driver to use...
	I1001 18:46:17.741057   47627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 18:46:17.753934   47627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 18:46:17.766000   47627 docker.go:218] disabling cri-docker service (if available) ...
	I1001 18:46:17.766053   47627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 18:46:17.778634   47627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 18:46:17.790002   47627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 18:46:17.891997   47627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 18:46:18.012930   47627 docker.go:234] disabling docker service ...
	I1001 18:46:18.013022   47627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 18:46:18.025094   47627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 18:46:18.035595   47627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 18:46:18.137099   47627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 18:46:18.244304   47627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 18:46:18.255778   47627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 18:46:18.271028   47627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1001 18:46:18.271083   47627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:46:18.280114   47627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 18:46:18.280173   47627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:46:18.289097   47627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:46:18.298243   47627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:46:18.307175   47627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 18:46:18.317649   47627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:46:18.327184   47627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:46:18.342686   47627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
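	(Reference note: the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, re-add conmon_cgroup = "pod", and open unprivileged ports through default_sysctls. A sketch of how the resulting values could be checked, assuming the stock file already carries pause_image and cgroup_manager lines for the sed expressions to match; expected output shown as comments.)
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.9"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",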
	I1001 18:46:18.352169   47627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 18:46:18.360739   47627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 18:46:18.360790   47627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 18:46:18.374022   47627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
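	(Reference note: the two commands above load the br_netfilter module and enable IPv4 forwarding so bridged pod traffic can be routed. An illustrative way to confirm both from inside the guest:)
	  lsmod | grep br_netfilter                    # module present after the modprobe
	  cat /proc/sys/net/ipv4/ip_forward            # expect: 1
	  sysctl net.bridge.bridge-nf-call-iptables    # resolves once br_netfilter is loaded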
	I1001 18:46:18.382556   47627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:46:18.494040   47627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 18:46:18.641361   47627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 18:46:18.641455   47627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 18:46:18.645598   47627 start.go:563] Will wait 60s for crictl version
	I1001 18:46:18.645654   47627 ssh_runner.go:195] Run: which crictl
	I1001 18:46:18.648906   47627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 18:46:18.685939   47627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1001 18:46:18.686021   47627 ssh_runner.go:195] Run: crio --version
	I1001 18:46:18.744854   47627 ssh_runner.go:195] Run: crio --version
	I1001 18:46:18.788503   47627 out.go:179] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1001 18:46:17.445563   47871 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 18:46:17.445785   47871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:46:17.445839   47871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:46:17.461112   47871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38125
	I1001 18:46:17.461550   47871 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:46:17.462028   47871 main.go:141] libmachine: Using API Version  1
	I1001 18:46:17.462043   47871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:46:17.462362   47871 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:46:17.462559   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetMachineName
	I1001 18:46:17.462698   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .DriverName
	I1001 18:46:17.462828   47871 start.go:159] libmachine.API.Create for "NoKubernetes-180525" (driver="kvm2")
	I1001 18:46:17.462860   47871 client.go:168] LocalClient.Create starting
	I1001 18:46:17.462903   47871 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem
	I1001 18:46:17.462942   47871 main.go:141] libmachine: Decoding PEM data...
	I1001 18:46:17.462962   47871 main.go:141] libmachine: Parsing certificate...
	I1001 18:46:17.463038   47871 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem
	I1001 18:46:17.463069   47871 main.go:141] libmachine: Decoding PEM data...
	I1001 18:46:17.463085   47871 main.go:141] libmachine: Parsing certificate...
	I1001 18:46:17.463113   47871 main.go:141] libmachine: Running pre-create checks...
	I1001 18:46:17.463141   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .PreCreateCheck
	I1001 18:46:17.463525   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetConfigRaw
	I1001 18:46:17.463926   47871 main.go:141] libmachine: Creating machine...
	I1001 18:46:17.463939   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .Create
	I1001 18:46:17.464051   47871 main.go:141] libmachine: (NoKubernetes-180525) creating domain...
	I1001 18:46:17.464073   47871 main.go:141] libmachine: (NoKubernetes-180525) creating network...
	I1001 18:46:17.465456   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found existing default network
	I1001 18:46:17.465659   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | <network connections='2'>
	I1001 18:46:17.465682   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <name>default</name>
	I1001 18:46:17.465695   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1001 18:46:17.465707   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <forward mode='nat'>
	I1001 18:46:17.465721   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <nat>
	I1001 18:46:17.465728   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <port start='1024' end='65535'/>
	I1001 18:46:17.465743   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </nat>
	I1001 18:46:17.465758   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   </forward>
	I1001 18:46:17.465779   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1001 18:46:17.465809   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1001 18:46:17.465825   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1001 18:46:17.465834   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <dhcp>
	I1001 18:46:17.465844   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1001 18:46:17.465854   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </dhcp>
	I1001 18:46:17.465862   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   </ip>
	I1001 18:46:17.465876   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | </network>
	I1001 18:46:17.465893   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | 
	I1001 18:46:17.466471   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:17.466325   48098 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ec:de:40} reservation:<nil>}
	I1001 18:46:17.467188   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:17.467090   48098 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000244130}
	I1001 18:46:17.467213   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | defining private network:
	I1001 18:46:17.467223   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | 
	I1001 18:46:17.467247   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | <network>
	I1001 18:46:17.467259   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <name>mk-NoKubernetes-180525</name>
	I1001 18:46:17.467266   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <dns enable='no'/>
	I1001 18:46:17.467281   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1001 18:46:17.467292   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <dhcp>
	I1001 18:46:17.467302   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1001 18:46:17.467313   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </dhcp>
	I1001 18:46:17.467321   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   </ip>
	I1001 18:46:17.467336   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | </network>
	I1001 18:46:17.467366   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | 
	I1001 18:46:17.473155   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | creating private network mk-NoKubernetes-180525 192.168.50.0/24...
	I1001 18:46:17.546666   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | private network mk-NoKubernetes-180525 192.168.50.0/24 created
	I1001 18:46:17.546922   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | <network>
	I1001 18:46:17.546942   47871 main.go:141] libmachine: (NoKubernetes-180525) setting up store path in /home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525 ...
	I1001 18:46:17.546951   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <name>mk-NoKubernetes-180525</name>
	I1001 18:46:17.546982   47871 main.go:141] libmachine: (NoKubernetes-180525) building disk image from file:///home/jenkins/minikube-integration/21631-9542/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1001 18:46:17.546991   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <uuid>f88b91cc-c3cf-4874-b724-c88482fd8505</uuid>
	I1001 18:46:17.547004   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I1001 18:46:17.547029   47871 main.go:141] libmachine: (NoKubernetes-180525) Downloading /home/jenkins/minikube-integration/21631-9542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21631-9542/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1001 18:46:17.547040   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <mac address='52:54:00:ad:54:0f'/>
	I1001 18:46:17.547052   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <dns enable='no'/>
	I1001 18:46:17.547065   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1001 18:46:17.547073   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <dhcp>
	I1001 18:46:17.547083   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1001 18:46:17.547091   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </dhcp>
	I1001 18:46:17.547103   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   </ip>
	I1001 18:46:17.547110   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | </network>
	I1001 18:46:17.547125   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | 
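	(Reference note: the XML above is the libvirt definition for the private network mk-NoKubernetes-180525 on 192.168.50.0/24. A sketch of how it could be inspected on the host; the network name is taken from the log, the virsh invocations are illustrative.)
	  virsh net-list --all                          # should list mk-NoKubernetes-180525 as active
	  virsh net-dumpxml mk-NoKubernetes-180525      # prints the definition shown above
	  virsh net-dhcp-leases mk-NoKubernetes-180525  # leases appear once the domain boots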
	I1001 18:46:17.547146   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:17.546910   48098 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 18:46:17.769962   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:17.769847   48098 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/id_rsa...
	I1001 18:46:18.067675   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:18.067544   48098 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/NoKubernetes-180525.rawdisk...
	I1001 18:46:18.067701   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | Writing magic tar header
	I1001 18:46:18.067718   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | Writing SSH key tar header
	I1001 18:46:18.067728   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:18.067668   48098 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525 ...
	I1001 18:46:18.067845   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525
	I1001 18:46:18.067908   47871 main.go:141] libmachine: (NoKubernetes-180525) setting executable bit set on /home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525 (perms=drwx------)
	I1001 18:46:18.067923   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542/.minikube/machines
	I1001 18:46:18.067942   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 18:46:18.067985   47871 main.go:141] libmachine: (NoKubernetes-180525) setting executable bit set on /home/jenkins/minikube-integration/21631-9542/.minikube/machines (perms=drwxr-xr-x)
	I1001 18:46:18.068007   47871 main.go:141] libmachine: (NoKubernetes-180525) setting executable bit set on /home/jenkins/minikube-integration/21631-9542/.minikube (perms=drwxr-xr-x)
	I1001 18:46:18.068020   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21631-9542
	I1001 18:46:18.068038   47871 main.go:141] libmachine: (NoKubernetes-180525) setting executable bit set on /home/jenkins/minikube-integration/21631-9542 (perms=drwxrwxr-x)
	I1001 18:46:18.068055   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1001 18:46:18.068071   47871 main.go:141] libmachine: (NoKubernetes-180525) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 18:46:18.068085   47871 main.go:141] libmachine: (NoKubernetes-180525) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 18:46:18.068102   47871 main.go:141] libmachine: (NoKubernetes-180525) defining domain...
	I1001 18:46:18.068116   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | checking permissions on dir: /home/jenkins
	I1001 18:46:18.068128   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | checking permissions on dir: /home
	I1001 18:46:18.068142   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | skipping /home - not owner
	I1001 18:46:18.069185   47871 main.go:141] libmachine: (NoKubernetes-180525) defining domain using XML: 
	I1001 18:46:18.069211   47871 main.go:141] libmachine: (NoKubernetes-180525) <domain type='kvm'>
	I1001 18:46:18.069222   47871 main.go:141] libmachine: (NoKubernetes-180525)   <name>NoKubernetes-180525</name>
	I1001 18:46:18.069229   47871 main.go:141] libmachine: (NoKubernetes-180525)   <memory unit='MiB'>3072</memory>
	I1001 18:46:18.069238   47871 main.go:141] libmachine: (NoKubernetes-180525)   <vcpu>2</vcpu>
	I1001 18:46:18.069244   47871 main.go:141] libmachine: (NoKubernetes-180525)   <features>
	I1001 18:46:18.069252   47871 main.go:141] libmachine: (NoKubernetes-180525)     <acpi/>
	I1001 18:46:18.069262   47871 main.go:141] libmachine: (NoKubernetes-180525)     <apic/>
	I1001 18:46:18.069280   47871 main.go:141] libmachine: (NoKubernetes-180525)     <pae/>
	I1001 18:46:18.069291   47871 main.go:141] libmachine: (NoKubernetes-180525)   </features>
	I1001 18:46:18.069300   47871 main.go:141] libmachine: (NoKubernetes-180525)   <cpu mode='host-passthrough'>
	I1001 18:46:18.069307   47871 main.go:141] libmachine: (NoKubernetes-180525)   </cpu>
	I1001 18:46:18.069316   47871 main.go:141] libmachine: (NoKubernetes-180525)   <os>
	I1001 18:46:18.069324   47871 main.go:141] libmachine: (NoKubernetes-180525)     <type>hvm</type>
	I1001 18:46:18.069333   47871 main.go:141] libmachine: (NoKubernetes-180525)     <boot dev='cdrom'/>
	I1001 18:46:18.069340   47871 main.go:141] libmachine: (NoKubernetes-180525)     <boot dev='hd'/>
	I1001 18:46:18.069350   47871 main.go:141] libmachine: (NoKubernetes-180525)     <bootmenu enable='no'/>
	I1001 18:46:18.069356   47871 main.go:141] libmachine: (NoKubernetes-180525)   </os>
	I1001 18:46:18.069367   47871 main.go:141] libmachine: (NoKubernetes-180525)   <devices>
	I1001 18:46:18.069377   47871 main.go:141] libmachine: (NoKubernetes-180525)     <disk type='file' device='cdrom'>
	I1001 18:46:18.069395   47871 main.go:141] libmachine: (NoKubernetes-180525)       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/boot2docker.iso'/>
	I1001 18:46:18.069406   47871 main.go:141] libmachine: (NoKubernetes-180525)       <target dev='hdc' bus='scsi'/>
	I1001 18:46:18.069414   47871 main.go:141] libmachine: (NoKubernetes-180525)       <readonly/>
	I1001 18:46:18.069423   47871 main.go:141] libmachine: (NoKubernetes-180525)     </disk>
	I1001 18:46:18.069461   47871 main.go:141] libmachine: (NoKubernetes-180525)     <disk type='file' device='disk'>
	I1001 18:46:18.069478   47871 main.go:141] libmachine: (NoKubernetes-180525)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 18:46:18.069496   47871 main.go:141] libmachine: (NoKubernetes-180525)       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/NoKubernetes-180525.rawdisk'/>
	I1001 18:46:18.069506   47871 main.go:141] libmachine: (NoKubernetes-180525)       <target dev='hda' bus='virtio'/>
	I1001 18:46:18.069514   47871 main.go:141] libmachine: (NoKubernetes-180525)     </disk>
	I1001 18:46:18.069524   47871 main.go:141] libmachine: (NoKubernetes-180525)     <interface type='network'>
	I1001 18:46:18.069533   47871 main.go:141] libmachine: (NoKubernetes-180525)       <source network='mk-NoKubernetes-180525'/>
	I1001 18:46:18.069543   47871 main.go:141] libmachine: (NoKubernetes-180525)       <model type='virtio'/>
	I1001 18:46:18.069558   47871 main.go:141] libmachine: (NoKubernetes-180525)     </interface>
	I1001 18:46:18.069576   47871 main.go:141] libmachine: (NoKubernetes-180525)     <interface type='network'>
	I1001 18:46:18.069602   47871 main.go:141] libmachine: (NoKubernetes-180525)       <source network='default'/>
	I1001 18:46:18.069633   47871 main.go:141] libmachine: (NoKubernetes-180525)       <model type='virtio'/>
	I1001 18:46:18.069643   47871 main.go:141] libmachine: (NoKubernetes-180525)     </interface>
	I1001 18:46:18.069658   47871 main.go:141] libmachine: (NoKubernetes-180525)     <serial type='pty'>
	I1001 18:46:18.069669   47871 main.go:141] libmachine: (NoKubernetes-180525)       <target port='0'/>
	I1001 18:46:18.069679   47871 main.go:141] libmachine: (NoKubernetes-180525)     </serial>
	I1001 18:46:18.069687   47871 main.go:141] libmachine: (NoKubernetes-180525)     <console type='pty'>
	I1001 18:46:18.069699   47871 main.go:141] libmachine: (NoKubernetes-180525)       <target type='serial' port='0'/>
	I1001 18:46:18.069707   47871 main.go:141] libmachine: (NoKubernetes-180525)     </console>
	I1001 18:46:18.069717   47871 main.go:141] libmachine: (NoKubernetes-180525)     <rng model='virtio'>
	I1001 18:46:18.069727   47871 main.go:141] libmachine: (NoKubernetes-180525)       <backend model='random'>/dev/random</backend>
	I1001 18:46:18.069753   47871 main.go:141] libmachine: (NoKubernetes-180525)     </rng>
	I1001 18:46:18.069763   47871 main.go:141] libmachine: (NoKubernetes-180525)   </devices>
	I1001 18:46:18.069769   47871 main.go:141] libmachine: (NoKubernetes-180525) </domain>
	I1001 18:46:18.069783   47871 main.go:141] libmachine: (NoKubernetes-180525) 
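The XML block logged above is the libvirt domain definition that libmachine generates before creating the guest. As an illustrative aside (not something the test run itself executes), the same define-and-start sequence can be reproduced by hand with virsh against the qemu:///system URI shown in the cluster config; the file name below is only a placeholder:

	# hypothetical manual equivalent of libmachine's define + start (domain.xml is a placeholder)
	virsh -c qemu:///system define domain.xml
	virsh -c qemu:///system start NoKubernetes-180525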
	I1001 18:46:18.074236   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:1c:db:25 in network default
	I1001 18:46:18.074732   47871 main.go:141] libmachine: (NoKubernetes-180525) starting domain...
	I1001 18:46:18.074758   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:18.074767   47871 main.go:141] libmachine: (NoKubernetes-180525) ensuring networks are active...
	I1001 18:46:18.075477   47871 main.go:141] libmachine: (NoKubernetes-180525) Ensuring network default is active
	I1001 18:46:18.075775   47871 main.go:141] libmachine: (NoKubernetes-180525) Ensuring network mk-NoKubernetes-180525 is active
	I1001 18:46:18.076392   47871 main.go:141] libmachine: (NoKubernetes-180525) getting domain XML...
	I1001 18:46:18.077319   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | starting domain XML:
	I1001 18:46:18.077335   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | <domain type='kvm'>
	I1001 18:46:18.077345   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <name>NoKubernetes-180525</name>
	I1001 18:46:18.077355   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <uuid>3b57dc7a-3413-424e-baa7-3667d000f495</uuid>
	I1001 18:46:18.077364   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <memory unit='KiB'>3145728</memory>
	I1001 18:46:18.077372   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1001 18:46:18.077381   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <vcpu placement='static'>2</vcpu>
	I1001 18:46:18.077386   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <os>
	I1001 18:46:18.077393   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1001 18:46:18.077398   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <boot dev='cdrom'/>
	I1001 18:46:18.077404   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <boot dev='hd'/>
	I1001 18:46:18.077414   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <bootmenu enable='no'/>
	I1001 18:46:18.077425   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   </os>
	I1001 18:46:18.077452   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <features>
	I1001 18:46:18.077481   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <acpi/>
	I1001 18:46:18.077503   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <apic/>
	I1001 18:46:18.077513   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <pae/>
	I1001 18:46:18.077520   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   </features>
	I1001 18:46:18.077533   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1001 18:46:18.077557   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <clock offset='utc'/>
	I1001 18:46:18.077570   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <on_poweroff>destroy</on_poweroff>
	I1001 18:46:18.077578   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <on_reboot>restart</on_reboot>
	I1001 18:46:18.077590   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <on_crash>destroy</on_crash>
	I1001 18:46:18.077599   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   <devices>
	I1001 18:46:18.077611   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1001 18:46:18.077623   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <disk type='file' device='cdrom'>
	I1001 18:46:18.077634   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <driver name='qemu' type='raw'/>
	I1001 18:46:18.077647   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/boot2docker.iso'/>
	I1001 18:46:18.077662   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <target dev='hdc' bus='scsi'/>
	I1001 18:46:18.077671   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <readonly/>
	I1001 18:46:18.077681   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1001 18:46:18.077689   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </disk>
	I1001 18:46:18.077697   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <disk type='file' device='disk'>
	I1001 18:46:18.077708   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1001 18:46:18.077732   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <source file='/home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/NoKubernetes-180525.rawdisk'/>
	I1001 18:46:18.077756   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <target dev='hda' bus='virtio'/>
	I1001 18:46:18.077781   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1001 18:46:18.077795   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </disk>
	I1001 18:46:18.077806   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1001 18:46:18.077821   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1001 18:46:18.077831   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </controller>
	I1001 18:46:18.077841   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1001 18:46:18.077849   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1001 18:46:18.077872   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1001 18:46:18.077890   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </controller>
	I1001 18:46:18.077900   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <interface type='network'>
	I1001 18:46:18.077911   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <mac address='52:54:00:ed:0e:19'/>
	I1001 18:46:18.077921   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <source network='mk-NoKubernetes-180525'/>
	I1001 18:46:18.077932   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <model type='virtio'/>
	I1001 18:46:18.077942   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1001 18:46:18.077949   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </interface>
	I1001 18:46:18.077958   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <interface type='network'>
	I1001 18:46:18.077974   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <mac address='52:54:00:1c:db:25'/>
	I1001 18:46:18.077987   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <source network='default'/>
	I1001 18:46:18.077994   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <model type='virtio'/>
	I1001 18:46:18.078008   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1001 18:46:18.078031   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </interface>
	I1001 18:46:18.078052   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <serial type='pty'>
	I1001 18:46:18.078069   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <target type='isa-serial' port='0'>
	I1001 18:46:18.078081   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |         <model name='isa-serial'/>
	I1001 18:46:18.078093   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       </target>
	I1001 18:46:18.078105   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </serial>
	I1001 18:46:18.078117   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <console type='pty'>
	I1001 18:46:18.078132   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <target type='serial' port='0'/>
	I1001 18:46:18.078142   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </console>
	I1001 18:46:18.078155   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <input type='mouse' bus='ps2'/>
	I1001 18:46:18.078170   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <input type='keyboard' bus='ps2'/>
	I1001 18:46:18.078179   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <audio id='1' type='none'/>
	I1001 18:46:18.078187   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <memballoon model='virtio'>
	I1001 18:46:18.078198   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1001 18:46:18.078206   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </memballoon>
	I1001 18:46:18.078212   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     <rng model='virtio'>
	I1001 18:46:18.078224   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <backend model='random'>/dev/random</backend>
	I1001 18:46:18.078244   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1001 18:46:18.078257   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |     </rng>
	I1001 18:46:18.078268   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG |   </devices>
	I1001 18:46:18.078275   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | </domain>
	I1001 18:46:18.078286   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | 
	I1001 18:46:19.552290   47871 main.go:141] libmachine: (NoKubernetes-180525) waiting for domain to start...
	I1001 18:46:19.554248   47871 main.go:141] libmachine: (NoKubernetes-180525) domain is now running
	I1001 18:46:19.554271   47871 main.go:141] libmachine: (NoKubernetes-180525) waiting for IP...
	I1001 18:46:19.555398   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:19.556068   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:19.556086   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:19.556501   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:19.556700   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:19.556650   48098 retry.go:31] will retry after 259.373333ms: waiting for domain to come up
	I1001 18:46:19.817649   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:19.818348   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:19.818381   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:19.818744   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:19.818770   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:19.818669   48098 retry.go:31] will retry after 234.759346ms: waiting for domain to come up
	I1001 18:46:19.566275   47159 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884 51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea 18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6 1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882 383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6 a16475d3f753f44c2c1b4a9b9b04235d6bfd0b681e5fa8787774b9dc4b577aea 4a1008cd6816f76c1385d97f09a2ae228e7cbede6f1175fc6c10c8d3ca70a51d 2c58ab47806f0cd585528456f77358792af71aad909529896cec7d1facfd4001 bdeec108f86f5cfb2f7d08015b084c5e5b8202940a55b5acbcda0265129fd4bf 40f929bcdc178aec842ce93dd5d17aed517431c35b9d0d55384853e1be32b10e 4b0c54b170ef27074b92d3c4355c71c32df8284932caa2dac4a7ce857d57a8dd: (20.846583413s)
	W1001 18:46:19.566402   47159 kubeadm.go:640] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884 51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea 18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6 1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882 383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6 a16475d3f753f44c2c1b4a9b9b04235d6bfd0b681e5fa8787774b9dc4b577aea 4a1008cd6816f76c1385d97f09a2ae228e7cbede6f1175fc6c10c8d3ca70a51d 2c58ab47806f0cd585528456f77358792af71aad909529896cec7d1facfd4001 bdeec108f86f5cfb2f7d08015b084c5e5b8202940a55b5acbcda0265129fd4bf 40f929bcdc178aec842ce93dd5d17aed517431c35b9d0d55384853e1be32b10e 4b0c54b170ef27074b92d3c4355c71c32df8284932caa2dac4a7ce857d57a8dd: Process exited with status 1
	stdout:
	7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884
	51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea
	18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6
	1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a
	c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882
	383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6
	
	stderr:
	E1001 18:46:19.558789    3380 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16475d3f753f44c2c1b4a9b9b04235d6bfd0b681e5fa8787774b9dc4b577aea\": container with ID starting with a16475d3f753f44c2c1b4a9b9b04235d6bfd0b681e5fa8787774b9dc4b577aea not found: ID does not exist" containerID="a16475d3f753f44c2c1b4a9b9b04235d6bfd0b681e5fa8787774b9dc4b577aea"
	time="2025-10-01T18:46:19Z" level=fatal msg="stopping the container \"a16475d3f753f44c2c1b4a9b9b04235d6bfd0b681e5fa8787774b9dc4b577aea\": rpc error: code = NotFound desc = could not find container \"a16475d3f753f44c2c1b4a9b9b04235d6bfd0b681e5fa8787774b9dc4b577aea\": container with ID starting with a16475d3f753f44c2c1b4a9b9b04235d6bfd0b681e5fa8787774b9dc4b577aea not found: ID does not exist"
	I1001 18:46:19.566507   47159 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 18:46:19.616466   47159 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 18:46:19.638747   47159 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  1 18:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Oct  1 18:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Oct  1 18:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Oct  1 18:44 /etc/kubernetes/scheduler.conf
	
	I1001 18:46:19.638815   47159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 18:46:19.657092   47159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 18:46:19.674039   47159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:46:19.674129   47159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 18:46:19.695836   47159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 18:46:19.714095   47159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:46:19.714162   47159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 18:46:19.730338   47159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 18:46:19.745613   47159 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:46:19.745676   47159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 18:46:19.762228   47159 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 18:46:19.781887   47159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:19.869850   47159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:18.789841   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetIP
	I1001 18:46:18.793452   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:18.793899   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:18.793933   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:18.794178   47627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1001 18:46:18.798816   47627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 18:46:18.810968   47627 kubeadm.go:875] updating cluster {Name:stopped-upgrade-149070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-149070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 18:46:18.811097   47627 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1001 18:46:18.811149   47627 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:46:18.850557   47627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1001 18:46:18.850703   47627 ssh_runner.go:195] Run: which lz4
	I1001 18:46:18.855278   47627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 18:46:18.859960   47627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 18:46:18.859998   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1001 18:46:20.466579   47627 crio.go:462] duration metric: took 1.611335262s to copy over tarball
	I1001 18:46:20.466684   47627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 18:46:20.055548   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:20.058483   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:20.058503   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:20.058517   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:20.058528   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:20.057290   48098 retry.go:31] will retry after 485.024113ms: waiting for domain to come up
	I1001 18:46:20.543974   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:20.544671   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:20.544714   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:20.545090   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:20.545118   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:20.545028   48098 retry.go:31] will retry after 585.246435ms: waiting for domain to come up
	I1001 18:46:21.132160   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:21.132905   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:21.132937   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:21.133326   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:21.133358   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:21.133294   48098 retry.go:31] will retry after 613.783979ms: waiting for domain to come up
	I1001 18:46:21.748942   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:21.749610   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:21.749643   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:21.750029   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:21.750057   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:21.749995   48098 retry.go:31] will retry after 851.636591ms: waiting for domain to come up
	I1001 18:46:22.603488   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:22.604206   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:22.604287   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:22.604542   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:22.604807   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:22.604679   48098 retry.go:31] will retry after 1.121086357s: waiting for domain to come up
	I1001 18:46:23.727857   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:23.730250   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:23.730331   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:23.730578   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:23.730623   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:23.730565   48098 retry.go:31] will retry after 1.077560875s: waiting for domain to come up
	I1001 18:46:24.809670   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:24.810303   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:24.810324   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:24.810701   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:24.810740   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:24.810678   48098 retry.go:31] will retry after 1.378122575s: waiting for domain to come up
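The retries above are libmachine polling for the guest's DHCP lease on the private network; until dnsmasq hands out an address, both the lease table (source=lease) and the ARP cache (source=arp) are empty, so the driver backs off and retries. A rough manual equivalent, assuming the standard libvirt tooling, would be:

	# same data the driver is polling: interface addresses by lease, then by ARP
	virsh -c qemu:///system domifaddr NoKubernetes-180525 --source lease
	virsh -c qemu:///system domifaddr NoKubernetes-180525 --source arp
	# the lease can also be read from the network side
	virsh -c qemu:///system net-dhcp-leases mk-NoKubernetes-180525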
	I1001 18:46:21.944843   47159 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.074947008s)
	I1001 18:46:21.944885   47159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:22.304387   47159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:22.397872   47159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
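The runs above show minikube re-executing individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init, which is how it rebuilds an existing control plane without wiping state. Stripped of the PATH wrapper, the sequence from this log is:

	sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml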
	I1001 18:46:22.490442   47159 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:46:22.490525   47159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:22.990692   47159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:23.490785   47159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:23.548636   47159 api_server.go:72] duration metric: took 1.058217154s to wait for apiserver process to appear ...
	I1001 18:46:23.548670   47159 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:46:23.548690   47159 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1001 18:46:26.318375   47159 api_server.go:279] https://192.168.39.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 18:46:26.318412   47159 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 18:46:26.318445   47159 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1001 18:46:26.344258   47159 api_server.go:279] https://192.168.39.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 18:46:26.344295   47159 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 18:46:23.650402   47627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.18366121s)
	I1001 18:46:23.650459   47627 crio.go:469] duration metric: took 3.183844827s to extract the tarball
	I1001 18:46:23.650486   47627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 18:46:23.716394   47627 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:46:23.774928   47627 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:46:23.774953   47627 cache_images.go:85] Images are preloaded, skipping loading
	I1001 18:46:23.774960   47627 kubeadm.go:926] updating node { 192.168.72.10 8443 v1.28.3 crio true true} ...
	I1001 18:46:23.775069   47627 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=stopped-upgrade-149070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-149070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 18:46:23.775135   47627 ssh_runner.go:195] Run: crio config
	I1001 18:46:23.838380   47627 cni.go:84] Creating CNI manager for ""
	I1001 18:46:23.838408   47627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:46:23.838421   47627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 18:46:23.838464   47627 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.10 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-149070 NodeName:stopped-upgrade-149070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 18:46:23.838636   47627 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-149070"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
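The YAML above is the generated kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) that is later copied to /var/tmp/minikube/kubeadm.yaml. As a point of comparison only, kubeadm can print its own defaults for the same kinds, which makes it easy to diff minikube's overrides against stock settings:

	# print kubeadm's defaults for the same config kinds (comparison only, not part of the test run)
	kubeadm config print init-defaults
	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration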
	I1001 18:46:23.838712   47627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1001 18:46:23.849021   47627 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 18:46:23.849087   47627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 18:46:23.859769   47627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1001 18:46:23.875540   47627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 18:46:23.891216   47627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I1001 18:46:23.910107   47627 ssh_runner.go:195] Run: grep 192.168.72.10	control-plane.minikube.internal$ /etc/hosts
	I1001 18:46:23.913520   47627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
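The one-liner above is the idiom minikube uses to update /etc/hosts: it filters out any existing line for the hostname, appends a fresh "IP<tab>hostname" entry, writes the result to a temp file, and copies it back with sudo (a plain redirect would not run as root). Restated on its own, with the values from this log:

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	  echo "192.168.72.10	control-plane.minikube.internal"; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts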
	I1001 18:46:23.925979   47627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:46:24.055604   47627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:46:24.069661   47627 certs.go:68] Setting up /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070 for IP: 192.168.72.10
	I1001 18:46:24.069683   47627 certs.go:194] generating shared ca certs ...
	I1001 18:46:24.069730   47627 certs.go:226] acquiring lock for ca certs: {Name:mkce5c4f8bce1e11a833f05c4b70f07050ce8e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:46:24.069908   47627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key
	I1001 18:46:24.069971   47627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key
	I1001 18:46:24.069984   47627 certs.go:256] generating profile certs ...
	I1001 18:46:24.070100   47627 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/client.key
	I1001 18:46:24.070135   47627 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.key.4c573dc2
	I1001 18:46:24.070158   47627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.crt.4c573dc2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.10]
	I1001 18:46:24.317895   47627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.crt.4c573dc2 ...
	I1001 18:46:24.317930   47627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.crt.4c573dc2: {Name:mk920d12e7e6ee066b229dfe29d025863ac546f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:46:24.318125   47627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.key.4c573dc2 ...
	I1001 18:46:24.318143   47627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.key.4c573dc2: {Name:mk96ff90c1fbd6e9b1764385a204251f65e8a9ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:46:24.318243   47627 certs.go:381] copying /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.crt.4c573dc2 -> /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.crt
	I1001 18:46:24.335091   47627 certs.go:385] copying /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.key.4c573dc2 -> /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.key
	I1001 18:46:24.335413   47627 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/proxy-client.key
	I1001 18:46:24.335592   47627 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469.pem (1338 bytes)
	W1001 18:46:24.335633   47627 certs.go:480] ignoring /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469_empty.pem, impossibly tiny 0 bytes
	I1001 18:46:24.335646   47627 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 18:46:24.335679   47627 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem (1082 bytes)
	I1001 18:46:24.335712   47627 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem (1123 bytes)
	I1001 18:46:24.335741   47627 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem (1675 bytes)
	I1001 18:46:24.335795   47627 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem (1708 bytes)
	I1001 18:46:24.336604   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 18:46:24.360204   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 18:46:24.383978   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 18:46:24.412711   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 18:46:24.432777   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1001 18:46:24.453037   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 18:46:24.476372   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 18:46:24.496817   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 18:46:24.517891   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 18:46:24.538032   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/certs/13469.pem --> /usr/share/ca-certificates/13469.pem (1338 bytes)
	I1001 18:46:24.561775   47627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem --> /usr/share/ca-certificates/134692.pem (1708 bytes)
	I1001 18:46:24.582571   47627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 18:46:24.599771   47627 ssh_runner.go:195] Run: openssl version
	I1001 18:46:24.606409   47627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 18:46:24.615967   47627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:46:24.620068   47627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:46:24.620124   47627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:46:24.625069   47627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 18:46:24.634035   47627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13469.pem && ln -fs /usr/share/ca-certificates/13469.pem /etc/ssl/certs/13469.pem"
	I1001 18:46:24.642830   47627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13469.pem
	I1001 18:46:24.647240   47627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 17:56 /usr/share/ca-certificates/13469.pem
	I1001 18:46:24.647301   47627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13469.pem
	I1001 18:46:24.652252   47627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13469.pem /etc/ssl/certs/51391683.0"
	I1001 18:46:24.661728   47627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134692.pem && ln -fs /usr/share/ca-certificates/134692.pem /etc/ssl/certs/134692.pem"
	I1001 18:46:24.671357   47627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134692.pem
	I1001 18:46:24.676032   47627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 17:56 /usr/share/ca-certificates/134692.pem
	I1001 18:46:24.676121   47627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134692.pem
	I1001 18:46:24.682927   47627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134692.pem /etc/ssl/certs/3ec20f2e.0"
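The ln -fs runs above exist because OpenSSL locates trusted CAs in /etc/ssl/certs by a subject-name hash: each PEM needs a <hash>.0 symlink, and the hash is exactly what the preceding "openssl x509 -hash -noout" calls compute. A standalone sketch of the same step for one of these certificates:

	# compute the subject-name hash and create the lookup symlink OpenSSL expects
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"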
	I1001 18:46:24.694383   47627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 18:46:24.699650   47627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 18:46:24.705900   47627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 18:46:24.711008   47627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 18:46:24.716222   47627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 18:46:24.721255   47627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 18:46:24.726228   47627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
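The openssl runs above all use -checkend 86400, i.e. they ask whether each certificate will still be valid 24 hours from now; a non-zero exit marks a certificate that expires within a day and would trigger regeneration. The same check in isolation (the certificate path is just one of those listed above):

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "still valid in 24h" \
	  || echo "expires within 24h"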
	I1001 18:46:24.731317   47627 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-149070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-149070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:46:24.731419   47627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 18:46:24.731504   47627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:46:24.765706   47627 cri.go:89] found id: ""
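The crictl query above lists every container, running or exited, whose pod namespace label is kube-system; the empty result (found id: "") means there is nothing left to stop before the restart. A small sketch of the same query, assuming crictl is on PATH and sudo is available (not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers returns the IDs of all containers labelled with the
// kube-system pod namespace, exactly the crictl invocation seen in the log.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids)) // 0 here, matching the log
}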
	I1001 18:46:24.765787   47627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W1001 18:46:24.774670   47627 kubeadm.go:405] apiserver tunnel failed: apiserver port not set
	I1001 18:46:24.774692   47627 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 18:46:24.774698   47627 kubeadm.go:589] restartPrimaryControlPlane start ...
	I1001 18:46:24.774746   47627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 18:46:24.787074   47627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:46:24.787731   47627 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-149070" does not appear in /home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:46:24.788041   47627 kubeconfig.go:62] /home/jenkins/minikube-integration/21631-9542/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-149070" cluster setting kubeconfig missing "stopped-upgrade-149070" context setting]
	I1001 18:46:24.788707   47627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/kubeconfig: {Name:mkccaec248bac902ba8059942e9729c12d140d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:46:24.794461   47627 kapi.go:59] client config for stopped-upgrade-149070: &rest.Config{Host:"https://192.168.72.10:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/client.crt", KeyFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/client.key", CAFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 18:46:24.795030   47627 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1001 18:46:24.795049   47627 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1001 18:46:24.795056   47627 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 18:46:24.795061   47627 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1001 18:46:24.795067   47627 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 18:46:24.795457   47627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 18:46:24.808259   47627 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -50,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
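Drift detection here hinges on diff's exit status: 0 means the on-disk kubeadm.yaml matches the freshly rendered kubeadm.yaml.new, 1 means they differ (as above, where only the containerRuntimeEndpoint line was added), and anything else is a real error. A hedged Go sketch of that logic, not the actual kubeadm.go code:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// configDrift runs "diff -u current proposed" and interprets the exit code:
// 0 = identical, 1 = drift (diff text returned), other = real failure.
func configDrift(current, proposed string) (string, bool, error) {
	cmd := exec.Command("diff", "-u", current, proposed)
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	if err == nil {
		return "", false, nil
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return out.String(), true, nil
	}
	return "", false, err
}

func main() {
	diff, drifted, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if drifted {
		fmt.Print(diff)
	}
}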
	I1001 18:46:24.808278   47627 kubeadm.go:1152] stopping kube-system containers ...
	I1001 18:46:24.808289   47627 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 18:46:24.808337   47627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:46:24.868471   47627 cri.go:89] found id: ""
	I1001 18:46:24.868577   47627 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 18:46:24.887950   47627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 18:46:24.898280   47627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 18:46:24.898304   47627 kubeadm.go:157] found existing configuration files:
	
	I1001 18:46:24.898356   47627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1001 18:46:24.908026   47627 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 18:46:24.908096   47627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 18:46:24.918587   47627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1001 18:46:24.928495   47627 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 18:46:24.928559   47627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 18:46:24.939199   47627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1001 18:46:24.948745   47627 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 18:46:24.948817   47627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 18:46:24.958939   47627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1001 18:46:24.968570   47627 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 18:46:24.968641   47627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 18:46:24.979366   47627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
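The grep/rm sequence above checks each kubeconfig-style file for the expected control-plane endpoint (literally port 0 here, since APIServerPort is 0 in this profile) and deletes any file that is missing or points elsewhere, so the kubeadm phases that follow can regenerate them. A rough Go equivalent, with the paths and endpoint copied from the log (sketch only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes config files that do not contain the expected
// control-plane endpoint, mirroring the grep-then-rm steps in the log.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(p) // missing or stale: let kubeadm regenerate it
			fmt.Println("removed stale config:", p)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:0", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}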
	I1001 18:46:24.989808   47627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:25.154973   47627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:25.783131   47627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:25.995451   47627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:26.072788   47627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:26.189129   47627 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:46:26.189217   47627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:26.690136   47627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
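The repeated pgrep runs above poll for a kube-apiserver process whose full command line mentions "minikube". A simple sketch of that wait loop (illustrative; the 500ms interval matches the spacing of the log lines, and the timeout is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching kube-apiserver
// process appears, returning its PID.
func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // newest matching PID
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}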
	I1001 18:46:26.548838   47159 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1001 18:46:26.554613   47159 api_server.go:279] https://192.168.39.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 18:46:26.554650   47159 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 18:46:27.049597   47159 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1001 18:46:27.060963   47159 api_server.go:279] https://192.168.39.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 18:46:27.061003   47159 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 18:46:27.549753   47159 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1001 18:46:27.557819   47159 api_server.go:279] https://192.168.39.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 18:46:27.557856   47159 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 18:46:28.049601   47159 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1001 18:46:28.056727   47159 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I1001 18:46:28.065296   47159 api_server.go:141] control plane version: v1.34.1
	I1001 18:46:28.065333   47159 api_server.go:131] duration metric: took 4.51665429s to wait for apiserver health ...
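While post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still running, /healthz keeps answering 500, so the wait above simply retries until it finally gets 200 "ok"; the 403 answers for the anonymous user seen later on the other cluster are handled the same way. Below is a minimal sketch of such a poller; it skips TLS verification for brevity, whereas the client in the log trusts the cluster CA (URL from the log, 4-minute timeout assumed):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// treating any other status (403, 500) as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.100:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}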
	I1001 18:46:28.065406   47159 cni.go:84] Creating CNI manager for ""
	I1001 18:46:28.065420   47159 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:46:28.067408   47159 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 18:46:28.068795   47159 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 18:46:28.081416   47159 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
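With the apiserver healthy, a bridge CNI config is copied to /etc/cni/net.d/1-k8s.conflist. The actual 496-byte payload is not shown in the log; the sketch below writes a generic bridge+portmap conflist of the same kind, and every field value in it is an assumption rather than minikube's real file:

package main

import (
	"fmt"
	"os"
)

// bridgeConflist is an illustrative bridge CNI config; the real contents of
// 1-k8s.conflist may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}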
	I1001 18:46:28.113903   47159 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:46:28.118246   47159 system_pods.go:59] 6 kube-system pods found
	I1001 18:46:28.118298   47159 system_pods.go:61] "coredns-66bc5c9577-d67rw" [7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:46:28.118313   47159 system_pods.go:61] "etcd-pause-145303" [578fdb30-9e55-4726-9f91-5ff40e3d386e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:46:28.118326   47159 system_pods.go:61] "kube-apiserver-pause-145303" [558bec2a-25fc-43ec-af2f-1225c6d3a7ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:46:28.118354   47159 system_pods.go:61] "kube-controller-manager-pause-145303" [12407246-6706-4bd2-ad63-59d92ae8383d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:46:28.118360   47159 system_pods.go:61] "kube-proxy-wh8vc" [cd499d04-e196-4e19-ad92-5e1a46fc3d51] Running
	I1001 18:46:28.118373   47159 system_pods.go:61] "kube-scheduler-pause-145303" [6beef327-3e09-4660-9382-bf41be4f147d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:46:28.118380   47159 system_pods.go:74] duration metric: took 4.45137ms to wait for pod list to return data ...
	I1001 18:46:28.118393   47159 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:46:28.121273   47159 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 18:46:28.121304   47159 node_conditions.go:123] node cpu capacity is 2
	I1001 18:46:28.121318   47159 node_conditions.go:105] duration metric: took 2.919981ms to run NodePressure ...
	I1001 18:46:28.121339   47159 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:28.402207   47159 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I1001 18:46:28.408769   47159 kubeadm.go:735] kubelet initialised
	I1001 18:46:28.408801   47159 kubeadm.go:736] duration metric: took 6.565578ms waiting for restarted kubelet to initialise ...
	I1001 18:46:28.408822   47159 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 18:46:28.439327   47159 ops.go:34] apiserver oom_adj: -16
	I1001 18:46:28.439357   47159 kubeadm.go:593] duration metric: took 29.806532375s to restartPrimaryControlPlane
	I1001 18:46:28.439373   47159 kubeadm.go:394] duration metric: took 29.91997513s to StartCluster
	I1001 18:46:28.439397   47159 settings.go:142] acquiring lock: {Name:mk5d6ab23dfd36d7b84e4e5d63470620e0207b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:46:28.439511   47159 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:46:28.440746   47159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/kubeconfig: {Name:mkccaec248bac902ba8059942e9729c12d140d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:46:28.441027   47159 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:46:28.441169   47159 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 18:46:28.441286   47159 config.go:182] Loaded profile config "pause-145303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:46:28.443422   47159 out.go:179] * Verifying Kubernetes components...
	I1001 18:46:28.443447   47159 out.go:179] * Enabled addons: 
	I1001 18:46:26.191083   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:26.191764   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:26.191817   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:26.192054   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:26.192091   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:26.192033   48098 retry.go:31] will retry after 2.307826778s: waiting for domain to come up
	I1001 18:46:28.501571   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:28.502332   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:28.502457   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:28.502629   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:28.502651   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:28.502621   48098 retry.go:31] will retry after 2.501019941s: waiting for domain to come up
	I1001 18:46:28.445174   47159 addons.go:514] duration metric: took 4.011181ms for enable addons: enabled=[]
	I1001 18:46:28.445194   47159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:46:28.744931   47159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:46:28.766509   47159 node_ready.go:35] waiting up to 6m0s for node "pause-145303" to be "Ready" ...
	I1001 18:46:28.771191   47159 node_ready.go:49] node "pause-145303" is "Ready"
	I1001 18:46:28.771227   47159 node_ready.go:38] duration metric: took 4.630608ms for node "pause-145303" to be "Ready" ...
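The node readiness wait above resolves as soon as the Ready condition on node "pause-145303" is True. A hedged client-go sketch of that check, using the kubeconfig path that appears earlier in this log (not minikube's own helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady fetches a node and reports whether its Ready condition is True.
func nodeIsReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeIsReady("/home/jenkins/minikube-integration/21631-9542/kubeconfig", "pause-145303")
	fmt.Println(ready, err)
}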
	I1001 18:46:28.771244   47159 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:46:28.771296   47159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:28.802314   47159 api_server.go:72] duration metric: took 361.247747ms to wait for apiserver process to appear ...
	I1001 18:46:28.802348   47159 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:46:28.802376   47159 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1001 18:46:28.810512   47159 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I1001 18:46:28.812254   47159 api_server.go:141] control plane version: v1.34.1
	I1001 18:46:28.812276   47159 api_server.go:131] duration metric: took 9.919066ms to wait for apiserver health ...
	I1001 18:46:28.812286   47159 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:46:28.815398   47159 system_pods.go:59] 6 kube-system pods found
	I1001 18:46:28.815443   47159 system_pods.go:61] "coredns-66bc5c9577-d67rw" [7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:46:28.815457   47159 system_pods.go:61] "etcd-pause-145303" [578fdb30-9e55-4726-9f91-5ff40e3d386e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:46:28.815467   47159 system_pods.go:61] "kube-apiserver-pause-145303" [558bec2a-25fc-43ec-af2f-1225c6d3a7ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:46:28.815478   47159 system_pods.go:61] "kube-controller-manager-pause-145303" [12407246-6706-4bd2-ad63-59d92ae8383d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:46:28.815483   47159 system_pods.go:61] "kube-proxy-wh8vc" [cd499d04-e196-4e19-ad92-5e1a46fc3d51] Running
	I1001 18:46:28.815491   47159 system_pods.go:61] "kube-scheduler-pause-145303" [6beef327-3e09-4660-9382-bf41be4f147d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:46:28.815498   47159 system_pods.go:74] duration metric: took 3.204867ms to wait for pod list to return data ...
	I1001 18:46:28.815509   47159 default_sa.go:34] waiting for default service account to be created ...
	I1001 18:46:28.817485   47159 default_sa.go:45] found service account: "default"
	I1001 18:46:28.817513   47159 default_sa.go:55] duration metric: took 1.994161ms for default service account to be created ...
	I1001 18:46:28.817522   47159 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 18:46:28.821106   47159 system_pods.go:86] 6 kube-system pods found
	I1001 18:46:28.821139   47159 system_pods.go:89] "coredns-66bc5c9577-d67rw" [7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:46:28.821153   47159 system_pods.go:89] "etcd-pause-145303" [578fdb30-9e55-4726-9f91-5ff40e3d386e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:46:28.821165   47159 system_pods.go:89] "kube-apiserver-pause-145303" [558bec2a-25fc-43ec-af2f-1225c6d3a7ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:46:28.821181   47159 system_pods.go:89] "kube-controller-manager-pause-145303" [12407246-6706-4bd2-ad63-59d92ae8383d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:46:28.821187   47159 system_pods.go:89] "kube-proxy-wh8vc" [cd499d04-e196-4e19-ad92-5e1a46fc3d51] Running
	I1001 18:46:28.821195   47159 system_pods.go:89] "kube-scheduler-pause-145303" [6beef327-3e09-4660-9382-bf41be4f147d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:46:28.821204   47159 system_pods.go:126] duration metric: took 3.674121ms to wait for k8s-apps to be running ...
	I1001 18:46:28.821213   47159 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 18:46:28.821271   47159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:46:28.845720   47159 system_svc.go:56] duration metric: took 24.496985ms WaitForService to wait for kubelet
	I1001 18:46:28.845749   47159 kubeadm.go:578] duration metric: took 404.689803ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:46:28.845774   47159 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:46:28.851239   47159 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 18:46:28.851268   47159 node_conditions.go:123] node cpu capacity is 2
	I1001 18:46:28.851281   47159 node_conditions.go:105] duration metric: took 5.501492ms to run NodePressure ...
	I1001 18:46:28.851295   47159 start.go:241] waiting for startup goroutines ...
	I1001 18:46:28.851305   47159 start.go:246] waiting for cluster config update ...
	I1001 18:46:28.851321   47159 start.go:255] writing updated cluster config ...
	I1001 18:46:28.851760   47159 ssh_runner.go:195] Run: rm -f paused
	I1001 18:46:28.861402   47159 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 18:46:28.862246   47159 kapi.go:59] client config for pause-145303: &rest.Config{Host:"https://192.168.39.100:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303/client.crt", KeyFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/pause-145303/client.key", CAFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 18:46:28.866255   47159 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d67rw" in "kube-system" namespace to be "Ready" or be gone ...
	W1001 18:46:30.871932   47159 pod_ready.go:104] pod "coredns-66bc5c9577-d67rw" is not "Ready", error: <nil>
	I1001 18:46:27.189294   47627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:27.689384   47627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:28.189389   47627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:28.689707   47627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:29.189882   47627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:29.214614   47627 api_server.go:72] duration metric: took 3.025481973s to wait for apiserver process to appear ...
	I1001 18:46:29.214650   47627 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:46:29.214672   47627 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1001 18:46:32.821134   47627 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 18:46:32.821167   47627 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 18:46:32.821188   47627 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1001 18:46:32.851990   47627 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 18:46:32.852017   47627 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 18:46:33.215503   47627 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1001 18:46:33.222588   47627 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1001 18:46:33.222620   47627 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1001 18:46:33.715166   47627 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1001 18:46:33.720295   47627 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1001 18:46:33.720327   47627 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1001 18:46:34.215596   47627 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1001 18:46:34.220445   47627 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1001 18:46:34.227806   47627 api_server.go:141] control plane version: v1.28.3
	I1001 18:46:34.227829   47627 api_server.go:131] duration metric: took 5.013172839s to wait for apiserver health ...
	I1001 18:46:34.227838   47627 cni.go:84] Creating CNI manager for ""
	I1001 18:46:34.227844   47627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 18:46:34.229469   47627 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 18:46:34.230720   47627 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 18:46:34.241288   47627 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 18:46:34.260076   47627 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:46:34.266611   47627 system_pods.go:59] 5 kube-system pods found
	I1001 18:46:34.266656   47627 system_pods.go:61] "etcd-stopped-upgrade-149070" [557f9e6c-ef63-4d77-9d82-5e679eb18064] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:46:34.266668   47627 system_pods.go:61] "kube-apiserver-stopped-upgrade-149070" [9c77893c-289e-4b1f-be2c-ab035f8e27f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:46:34.266675   47627 system_pods.go:61] "kube-controller-manager-stopped-upgrade-149070" [0d761d04-9193-4d52-9512-ba1e5ebea7c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:46:34.266681   47627 system_pods.go:61] "kube-scheduler-stopped-upgrade-149070" [6770f889-794c-4acb-af7e-33dd30c7c80b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:46:34.266686   47627 system_pods.go:61] "storage-provisioner" [c9285d07-e9fd-40eb-9964-90a0586bab9b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1001 18:46:34.266693   47627 system_pods.go:74] duration metric: took 6.596397ms to wait for pod list to return data ...
	I1001 18:46:34.266702   47627 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:46:34.269872   47627 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1001 18:46:34.269902   47627 node_conditions.go:123] node cpu capacity is 2
	I1001 18:46:34.269912   47627 node_conditions.go:105] duration metric: took 3.206069ms to run NodePressure ...
	I1001 18:46:34.269927   47627 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:46:34.464845   47627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 18:46:34.474450   47627 ops.go:34] apiserver oom_adj: -16
	I1001 18:46:34.474472   47627 kubeadm.go:593] duration metric: took 9.699767471s to restartPrimaryControlPlane
	I1001 18:46:34.474485   47627 kubeadm.go:394] duration metric: took 9.743178035s to StartCluster
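The oom_adj check a few lines above reads /proc/<pid>/oom_adj for the kube-apiserver process; -16 means the kernel's OOM killer is strongly discouraged from picking it under memory pressure. A small sketch of the same read (illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj finds the kube-apiserver PID and reads its legacy OOM
// adjustment value, matching "cat /proc/$(pgrep kube-apiserver)/oom_adj".
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	p := strings.Fields(string(pid))[0] // first matching PID
	data, err := os.ReadFile("/proc/" + p + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	fmt.Println(adj, err)
}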
	I1001 18:46:34.474505   47627 settings.go:142] acquiring lock: {Name:mk5d6ab23dfd36d7b84e4e5d63470620e0207b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:46:34.474587   47627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:46:34.475393   47627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/kubeconfig: {Name:mkccaec248bac902ba8059942e9729c12d140d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:46:34.475629   47627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:46:34.475717   47627 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 18:46:34.475797   47627 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-149070"
	I1001 18:46:34.475814   47627 addons.go:238] Setting addon storage-provisioner=true in "stopped-upgrade-149070"
	I1001 18:46:34.475820   47627 config.go:182] Loaded profile config "stopped-upgrade-149070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	W1001 18:46:34.475823   47627 addons.go:247] addon storage-provisioner should already be in state true
	I1001 18:46:34.475829   47627 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-149070"
	I1001 18:46:34.475904   47627 host.go:66] Checking if "stopped-upgrade-149070" exists ...
	I1001 18:46:34.475922   47627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-149070"
	I1001 18:46:34.476249   47627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:46:34.476289   47627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:46:34.476351   47627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:46:34.476368   47627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:46:34.477586   47627 out.go:179] * Verifying Kubernetes components...
	I1001 18:46:34.477602   47627 out.go:179] * Creating mount /home/jenkins:/minikube-host ...
	I1001 18:46:34.478937   47627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:46:34.479297   47627 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/.mount-process: {Name:mke19776cf7fdda2f02e558a607bbd6cdea2f697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:46:34.493792   47627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
	I1001 18:46:34.494268   47627 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:46:34.494634   47627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I1001 18:46:34.495013   47627 main.go:141] libmachine: Using API Version  1
	I1001 18:46:34.495042   47627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:46:34.495524   47627 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:46:34.495576   47627 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:46:34.495920   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetState
	I1001 18:46:34.496202   47627 main.go:141] libmachine: Using API Version  1
	I1001 18:46:34.496226   47627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:46:34.496607   47627 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:46:34.497180   47627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:46:34.497228   47627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:46:34.498857   47627 kapi.go:59] client config for stopped-upgrade-149070: &rest.Config{Host:"https://192.168.72.10:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/client.crt", KeyFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/profiles/stopped-upgrade-149070/client.key", CAFile:"/home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 18:46:34.499163   47627 addons.go:238] Setting addon default-storageclass=true in "stopped-upgrade-149070"
	W1001 18:46:34.499181   47627 addons.go:247] addon default-storageclass should already be in state true
	I1001 18:46:34.499208   47627 host.go:66] Checking if "stopped-upgrade-149070" exists ...
	I1001 18:46:34.499532   47627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:46:34.499578   47627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:46:34.514601   47627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I1001 18:46:34.514946   47627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I1001 18:46:34.515139   47627 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:46:34.515590   47627 main.go:141] libmachine: Using API Version  1
	I1001 18:46:34.515610   47627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:46:34.515680   47627 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:46:34.516240   47627 main.go:141] libmachine: Using API Version  1
	I1001 18:46:34.516257   47627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:46:34.516577   47627 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:46:34.516678   47627 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:46:34.516858   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetState
	I1001 18:46:34.517169   47627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:46:34.517219   47627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:46:34.519341   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .DriverName
	I1001 18:46:34.521132   47627 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 18:46:31.004766   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:31.005527   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | no network interface addresses found for domain NoKubernetes-180525 (source=lease)
	I1001 18:46:31.005556   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | trying to list again with source=arp
	I1001 18:46:31.005809   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find current IP address of domain NoKubernetes-180525 in network mk-NoKubernetes-180525 (interfaces detected: [])
	I1001 18:46:31.005837   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | I1001 18:46:31.005795   48098 retry.go:31] will retry after 2.865008369s: waiting for domain to come up
	I1001 18:46:33.873062   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:33.873884   47871 main.go:141] libmachine: (NoKubernetes-180525) found domain IP: 192.168.50.196
	I1001 18:46:33.873911   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has current primary IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:33.873919   47871 main.go:141] libmachine: (NoKubernetes-180525) reserving static IP address...
	I1001 18:46:33.874602   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-180525", mac: "52:54:00:ed:0e:19", ip: "192.168.50.196"} in network mk-NoKubernetes-180525
	I1001 18:46:34.092834   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | Getting to WaitForSSH function...
	I1001 18:46:34.092863   47871 main.go:141] libmachine: (NoKubernetes-180525) reserved static IP address 192.168.50.196 for domain NoKubernetes-180525
	I1001 18:46:34.092875   47871 main.go:141] libmachine: (NoKubernetes-180525) waiting for SSH...
	I1001 18:46:34.096903   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.097420   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:34.097464   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.097702   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | Using SSH client type: external
	I1001 18:46:34.097742   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | Using SSH private key: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/id_rsa (-rw-------)
	I1001 18:46:34.097788   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 18:46:34.097805   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | About to run SSH command:
	I1001 18:46:34.097825   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | exit 0
	I1001 18:46:34.234572   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | SSH cmd err, output: <nil>: 
	I1001 18:46:34.234847   47871 main.go:141] libmachine: (NoKubernetes-180525) domain creation complete
	I1001 18:46:34.235293   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetConfigRaw
	I1001 18:46:34.235943   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .DriverName
	I1001 18:46:34.236401   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .DriverName
	I1001 18:46:34.236631   47871 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 18:46:34.236653   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetState
	I1001 18:46:34.238158   47871 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 18:46:34.238170   47871 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 18:46:34.238175   47871 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 18:46:34.238180   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:34.241245   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.241731   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:34.241757   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.241943   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHPort
	I1001 18:46:34.242147   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:34.242334   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:34.242507   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHUsername
	I1001 18:46:34.242697   47871 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:34.242966   47871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.196 22 <nil> <nil>}
	I1001 18:46:34.242981   47871 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 18:46:34.354141   47871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:46:34.354169   47871 main.go:141] libmachine: Detecting the provisioner...
	I1001 18:46:34.354182   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:34.357616   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.358136   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:34.358158   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.358419   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHPort
	I1001 18:46:34.358692   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:34.358897   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:34.359073   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHUsername
	I1001 18:46:34.359276   47871 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:34.359529   47871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.196 22 <nil> <nil>}
	I1001 18:46:34.359549   47871 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 18:46:34.482842   47871 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1001 18:46:34.482935   47871 main.go:141] libmachine: found compatible host: buildroot
	I1001 18:46:34.482945   47871 main.go:141] libmachine: Provisioning with buildroot...
	I1001 18:46:34.482955   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetMachineName
	I1001 18:46:34.483224   47871 buildroot.go:166] provisioning hostname "NoKubernetes-180525"
	I1001 18:46:34.483245   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetMachineName
	I1001 18:46:34.483454   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:34.487365   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.487916   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:34.487957   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.488183   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHPort
	I1001 18:46:34.488377   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:34.488591   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:34.488847   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHUsername
	I1001 18:46:34.489068   47871 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:34.489369   47871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.196 22 <nil> <nil>}
	I1001 18:46:34.489392   47871 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-180525 && echo "NoKubernetes-180525" | sudo tee /etc/hostname
	I1001 18:46:34.632557   47871 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-180525
	
	I1001 18:46:34.632591   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:34.636365   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.636810   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:34.636840   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.637091   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHPort
	I1001 18:46:34.637301   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:34.637471   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:34.637634   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHUsername
	I1001 18:46:34.637828   47871 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:34.638087   47871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.196 22 <nil> <nil>}
	I1001 18:46:34.638106   47871 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-180525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-180525/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-180525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 18:46:34.775325   47871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:46:34.775359   47871 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21631-9542/.minikube CaCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21631-9542/.minikube}
	I1001 18:46:34.775388   47871 buildroot.go:174] setting up certificates
	I1001 18:46:34.775403   47871 provision.go:84] configureAuth start
	I1001 18:46:34.775424   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetMachineName
	I1001 18:46:34.775755   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetIP
	I1001 18:46:34.779186   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.779609   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:34.779641   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.779924   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:34.782934   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.783356   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:34.783387   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:34.783524   47871 provision.go:143] copyHostCerts
	I1001 18:46:34.783594   47871 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem
	I1001 18:46:34.783638   47871 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem, removing ...
	I1001 18:46:34.783661   47871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem
	I1001 18:46:34.783746   47871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/ca.pem (1082 bytes)
	I1001 18:46:34.783869   47871 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem
	I1001 18:46:34.783893   47871 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem, removing ...
	I1001 18:46:34.783902   47871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem
	I1001 18:46:34.783946   47871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/cert.pem (1123 bytes)
	I1001 18:46:34.784031   47871 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem
	I1001 18:46:34.784055   47871 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem, removing ...
	I1001 18:46:34.784064   47871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem
	I1001 18:46:34.784102   47871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21631-9542/.minikube/key.pem (1675 bytes)
	I1001 18:46:34.784193   47871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-180525 san=[127.0.0.1 192.168.50.196 NoKubernetes-180525 localhost minikube]
	I1001 18:46:35.786903   48033 start.go:364] duration metric: took 25.026181829s to acquireMachinesLock for "kubernetes-upgrade-130620"
	I1001 18:46:35.787052   48033 start.go:96] Skipping create...Using existing machine configuration
	I1001 18:46:35.787065   48033 fix.go:54] fixHost starting: 
	I1001 18:46:35.787526   48033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:46:35.787580   48033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:46:35.803785   48033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46543
	I1001 18:46:35.804267   48033 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:46:35.804913   48033 main.go:141] libmachine: Using API Version  1
	I1001 18:46:35.804942   48033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:46:35.805294   48033 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:46:35.805515   48033 main.go:141] libmachine: (kubernetes-upgrade-130620) Calling .DriverName
	I1001 18:46:35.805692   48033 main.go:141] libmachine: (kubernetes-upgrade-130620) Calling .GetState
	I1001 18:46:35.807478   48033 fix.go:112] recreateIfNeeded on kubernetes-upgrade-130620: state=Stopped err=<nil>
	I1001 18:46:35.807509   48033 main.go:141] libmachine: (kubernetes-upgrade-130620) Calling .DriverName
	W1001 18:46:35.807684   48033 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 18:46:34.522450   47627 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:46:34.522470   47627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 18:46:34.522487   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:34.527613   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:34.528315   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:34.528395   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:34.529120   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:34.529373   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:34.529596   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:34.529752   47627 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/stopped-upgrade-149070/id_rsa Username:docker}
	I1001 18:46:34.536995   47627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45797
	I1001 18:46:34.537719   47627 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:46:34.538240   47627 main.go:141] libmachine: Using API Version  1
	I1001 18:46:34.538262   47627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:46:34.538670   47627 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:46:34.538891   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetState
	I1001 18:46:34.541306   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .DriverName
	I1001 18:46:34.541592   47627 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 18:46:34.541612   47627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 18:46:34.541631   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHHostname
	I1001 18:46:34.545206   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:34.545635   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:57:f6", ip: ""} in network mk-stopped-upgrade-149070: {Iface:virbr4 ExpiryTime:2025-10-01 19:46:12 +0000 UTC Type:0 Mac:52:54:00:97:57:f6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:stopped-upgrade-149070 Clientid:01:52:54:00:97:57:f6}
	I1001 18:46:34.545666   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | domain stopped-upgrade-149070 has defined IP address 192.168.72.10 and MAC address 52:54:00:97:57:f6 in network mk-stopped-upgrade-149070
	I1001 18:46:34.545797   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHPort
	I1001 18:46:34.545974   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHKeyPath
	I1001 18:46:34.546153   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .GetSSHUsername
	I1001 18:46:34.546342   47627 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/stopped-upgrade-149070/id_rsa Username:docker}
	I1001 18:46:34.638559   47627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:46:34.657978   47627 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:46:34.658056   47627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:46:34.669289   47627 api_server.go:72] duration metric: took 193.623475ms to wait for apiserver process to appear ...
	I1001 18:46:34.669314   47627 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:46:34.669333   47627 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1001 18:46:34.674313   47627 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1001 18:46:34.675669   47627 api_server.go:141] control plane version: v1.28.3
	I1001 18:46:34.675695   47627 api_server.go:131] duration metric: took 6.370971ms to wait for apiserver health ...
	I1001 18:46:34.675705   47627 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:46:34.679202   47627 system_pods.go:59] 5 kube-system pods found
	I1001 18:46:34.679237   47627 system_pods.go:61] "etcd-stopped-upgrade-149070" [557f9e6c-ef63-4d77-9d82-5e679eb18064] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:46:34.679260   47627 system_pods.go:61] "kube-apiserver-stopped-upgrade-149070" [9c77893c-289e-4b1f-be2c-ab035f8e27f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:46:34.679274   47627 system_pods.go:61] "kube-controller-manager-stopped-upgrade-149070" [0d761d04-9193-4d52-9512-ba1e5ebea7c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:46:34.679285   47627 system_pods.go:61] "kube-scheduler-stopped-upgrade-149070" [6770f889-794c-4acb-af7e-33dd30c7c80b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:46:34.679298   47627 system_pods.go:61] "storage-provisioner" [c9285d07-e9fd-40eb-9964-90a0586bab9b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1001 18:46:34.679308   47627 system_pods.go:74] duration metric: took 3.595224ms to wait for pod list to return data ...
	I1001 18:46:34.679325   47627 kubeadm.go:578] duration metric: took 203.662346ms to wait for: map[apiserver:true system_pods:true]
	I1001 18:46:34.679343   47627 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:46:34.681666   47627 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1001 18:46:34.681689   47627 node_conditions.go:123] node cpu capacity is 2
	I1001 18:46:34.681700   47627 node_conditions.go:105] duration metric: took 2.351756ms to run NodePressure ...
	I1001 18:46:34.681712   47627 start.go:241] waiting for startup goroutines ...
	I1001 18:46:34.723910   47627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 18:46:34.764007   47627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:46:35.586886   47627 main.go:141] libmachine: Making call to close driver server
	I1001 18:46:35.586916   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .Close
	I1001 18:46:35.587215   47627 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:46:35.587231   47627 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:46:35.587239   47627 main.go:141] libmachine: Making call to close driver server
	I1001 18:46:35.587251   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .Close
	I1001 18:46:35.587507   47627 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:46:35.587520   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | Closing plugin on server side
	I1001 18:46:35.587525   47627 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:46:35.596309   47627 main.go:141] libmachine: Making call to close driver server
	I1001 18:46:35.596334   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .Close
	I1001 18:46:35.596717   47627 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:46:35.596738   47627 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:46:35.596753   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | Closing plugin on server side
	I1001 18:46:35.864875   47627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.100816504s)
	I1001 18:46:35.864944   47627 main.go:141] libmachine: Making call to close driver server
	I1001 18:46:35.864958   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .Close
	I1001 18:46:35.865271   47627 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:46:35.865295   47627 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:46:35.865309   47627 main.go:141] libmachine: Making call to close driver server
	I1001 18:46:35.865318   47627 main.go:141] libmachine: (stopped-upgrade-149070) Calling .Close
	I1001 18:46:35.865573   47627 main.go:141] libmachine: (stopped-upgrade-149070) DBG | Closing plugin on server side
	I1001 18:46:35.865632   47627 main.go:141] libmachine: Successfully made call to close driver server
	I1001 18:46:35.865652   47627 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 18:46:35.868607   47627 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1001 18:46:35.869768   47627 addons.go:514] duration metric: took 1.394055181s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1001 18:46:35.869806   47627 start.go:246] waiting for cluster config update ...
	I1001 18:46:35.869821   47627 start.go:255] writing updated cluster config ...
	I1001 18:46:35.870108   47627 ssh_runner.go:195] Run: rm -f paused
	I1001 18:46:35.922877   47627 start.go:620] kubectl: 1.34.1, cluster: 1.28.3 (minor skew: 6)
	I1001 18:46:35.924596   47627 out.go:203] 
	W1001 18:46:35.925889   47627 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.3.
	I1001 18:46:35.927136   47627 out.go:179]   - Want kubectl v1.28.3? Try 'minikube kubectl -- get pods -A'
	I1001 18:46:35.928451   47627 out.go:179] * Done! kubectl is now configured to use "stopped-upgrade-149070" cluster and "default" namespace by default
	W1001 18:46:32.872474   47159 pod_ready.go:104] pod "coredns-66bc5c9577-d67rw" is not "Ready", error: <nil>
	I1001 18:46:34.374193   47159 pod_ready.go:94] pod "coredns-66bc5c9577-d67rw" is "Ready"
	I1001 18:46:34.374223   47159 pod_ready.go:86] duration metric: took 5.507941263s for pod "coredns-66bc5c9577-d67rw" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:34.377393   47159 pod_ready.go:83] waiting for pod "etcd-pause-145303" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:35.031343   47871 provision.go:177] copyRemoteCerts
	I1001 18:46:35.031413   47871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 18:46:35.031450   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:35.034143   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.034574   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:35.034611   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.034841   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHPort
	I1001 18:46:35.035073   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:35.035227   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHUsername
	I1001 18:46:35.035372   47871 sshutil.go:53] new ssh client: &{IP:192.168.50.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/id_rsa Username:docker}
	I1001 18:46:35.124999   47871 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 18:46:35.125079   47871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 18:46:35.155050   47871 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 18:46:35.155107   47871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 18:46:35.187228   47871 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 18:46:35.187322   47871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 18:46:35.218700   47871 provision.go:87] duration metric: took 443.278119ms to configureAuth
	I1001 18:46:35.218733   47871 buildroot.go:189] setting minikube options for container-runtime
	I1001 18:46:35.218950   47871 config.go:182] Loaded profile config "NoKubernetes-180525": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1001 18:46:35.219045   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:35.222550   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.222932   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:35.222963   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.223220   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHPort
	I1001 18:46:35.223451   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:35.223645   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:35.223789   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHUsername
	I1001 18:46:35.224018   47871 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:35.224226   47871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.196 22 <nil> <nil>}
	I1001 18:46:35.224242   47871 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 18:46:35.505486   47871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 18:46:35.505526   47871 main.go:141] libmachine: Checking connection to Docker...
	I1001 18:46:35.505538   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetURL
	I1001 18:46:35.506997   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | using libvirt version 8000000
	I1001 18:46:35.510564   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.511024   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:35.511046   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.511255   47871 main.go:141] libmachine: Docker is up and running!
	I1001 18:46:35.511270   47871 main.go:141] libmachine: Reticulating splines...
	I1001 18:46:35.511278   47871 client.go:171] duration metric: took 18.048399412s to LocalClient.Create
	I1001 18:46:35.511304   47871 start.go:167] duration metric: took 18.048476984s to libmachine.API.Create "NoKubernetes-180525"
	I1001 18:46:35.511317   47871 start.go:293] postStartSetup for "NoKubernetes-180525" (driver="kvm2")
	I1001 18:46:35.511329   47871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 18:46:35.511354   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .DriverName
	I1001 18:46:35.511651   47871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 18:46:35.511675   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:35.514497   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.514916   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:35.514944   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.515147   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHPort
	I1001 18:46:35.515353   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:35.515575   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHUsername
	I1001 18:46:35.515770   47871 sshutil.go:53] new ssh client: &{IP:192.168.50.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/id_rsa Username:docker}
	I1001 18:46:35.604746   47871 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 18:46:35.611577   47871 info.go:137] Remote host: Buildroot 2025.02
	I1001 18:46:35.611618   47871 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/addons for local assets ...
	I1001 18:46:35.611689   47871 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-9542/.minikube/files for local assets ...
	I1001 18:46:35.611797   47871 filesync.go:149] local asset: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem -> 134692.pem in /etc/ssl/certs
	I1001 18:46:35.611810   47871 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem -> /etc/ssl/certs/134692.pem
	I1001 18:46:35.611930   47871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 18:46:35.624540   47871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/ssl/certs/134692.pem --> /etc/ssl/certs/134692.pem (1708 bytes)
	I1001 18:46:35.659818   47871 start.go:296] duration metric: took 148.485416ms for postStartSetup
	I1001 18:46:35.659868   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetConfigRaw
	I1001 18:46:35.660533   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetIP
	I1001 18:46:35.663772   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.664243   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:35.664272   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.664647   47871 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/NoKubernetes-180525/config.json ...
	I1001 18:46:35.664844   47871 start.go:128] duration metric: took 18.220980568s to createHost
	I1001 18:46:35.664867   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:35.667697   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.668182   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:35.668206   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.668459   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHPort
	I1001 18:46:35.668660   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:35.668830   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:35.668998   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHUsername
	I1001 18:46:35.669199   47871 main.go:141] libmachine: Using SSH client type: native
	I1001 18:46:35.669519   47871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.196 22 <nil> <nil>}
	I1001 18:46:35.669540   47871 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 18:46:35.786738   47871 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759344395.744194636
	
	I1001 18:46:35.786763   47871 fix.go:216] guest clock: 1759344395.744194636
	I1001 18:46:35.786773   47871 fix.go:229] Guest: 2025-10-01 18:46:35.744194636 +0000 UTC Remote: 2025-10-01 18:46:35.664854659 +0000 UTC m=+30.726254383 (delta=79.339977ms)
	I1001 18:46:35.786798   47871 fix.go:200] guest clock delta is within tolerance: 79.339977ms
	I1001 18:46:35.786804   47871 start.go:83] releasing machines lock for "NoKubernetes-180525", held for 18.343092158s
	I1001 18:46:35.786832   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .DriverName
	I1001 18:46:35.787112   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetIP
	I1001 18:46:35.790614   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.791118   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:35.791150   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.791322   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .DriverName
	I1001 18:46:35.791909   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .DriverName
	I1001 18:46:35.792093   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .DriverName
	I1001 18:46:35.792204   47871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 18:46:35.792245   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:35.792304   47871 ssh_runner.go:195] Run: cat /version.json
	I1001 18:46:35.792333   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHHostname
	I1001 18:46:35.796029   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.796070   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.796478   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:35.796499   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.796713   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:19", ip: ""} in network mk-NoKubernetes-180525: {Iface:virbr3 ExpiryTime:2025-10-01 19:46:33 +0000 UTC Type:0 Mac:52:54:00:ed:0e:19 Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:nokubernetes-180525 Clientid:01:52:54:00:ed:0e:19}
	I1001 18:46:35.796731   47871 main.go:141] libmachine: (NoKubernetes-180525) DBG | domain NoKubernetes-180525 has defined IP address 192.168.50.196 and MAC address 52:54:00:ed:0e:19 in network mk-NoKubernetes-180525
	I1001 18:46:35.797044   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHPort
	I1001 18:46:35.797096   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHPort
	I1001 18:46:35.797299   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:35.797315   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHKeyPath
	I1001 18:46:35.797507   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHUsername
	I1001 18:46:35.797553   47871 main.go:141] libmachine: (NoKubernetes-180525) Calling .GetSSHUsername
	I1001 18:46:35.797670   47871 sshutil.go:53] new ssh client: &{IP:192.168.50.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/id_rsa Username:docker}
	I1001 18:46:35.797661   47871 sshutil.go:53] new ssh client: &{IP:192.168.50.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/NoKubernetes-180525/id_rsa Username:docker}
	I1001 18:46:35.886082   47871 ssh_runner.go:195] Run: systemctl --version
	I1001 18:46:35.920014   47871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 18:46:36.089708   47871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 18:46:36.098870   47871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 18:46:36.098989   47871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:46:36.121726   47871 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 18:46:36.121745   47871 start.go:495] detecting cgroup driver to use...
	I1001 18:46:36.121818   47871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 18:46:36.143216   47871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 18:46:36.162660   47871 docker.go:218] disabling cri-docker service (if available) ...
	I1001 18:46:36.162728   47871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 18:46:36.181946   47871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 18:46:36.205639   47871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 18:46:36.398219   47871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 18:46:36.653401   47871 docker.go:234] disabling docker service ...
	I1001 18:46:36.653495   47871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 18:46:36.671886   47871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 18:46:36.686628   47871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 18:46:36.892370   47871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 18:46:37.089984   47871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 18:46:37.118498   47871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 18:46:37.150209   47871 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21631-9542/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1001 18:46:37.687105   47871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1001 18:46:37.687161   47871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:46:37.700751   47871 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 18:46:37.700837   47871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:46:37.714334   47871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:46:37.729506   47871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:46:37.744690   47871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 18:46:37.760691   47871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 18:46:37.772419   47871 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 18:46:37.772527   47871 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 18:46:37.794969   47871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 18:46:37.807462   47871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:46:37.969619   47871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 18:46:38.083326   47871 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 18:46:38.083410   47871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 18:46:38.089536   47871 start.go:563] Will wait 60s for crictl version
	I1001 18:46:38.089636   47871 ssh_runner.go:195] Run: which crictl
	I1001 18:46:38.093995   47871 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 18:46:38.141939   47871 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 18:46:38.142037   47871 ssh_runner.go:195] Run: crio --version
	I1001 18:46:38.182817   47871 ssh_runner.go:195] Run: crio --version
	I1001 18:46:38.226215   47871 out.go:179] * Preparing CRI-O 1.29.1 ...
	I1001 18:46:38.227617   47871 ssh_runner.go:195] Run: rm -f paused
	W1001 18:46:36.385047   47159 pod_ready.go:104] pod "etcd-pause-145303" is not "Ready", error: <nil>
	I1001 18:46:37.383451   47159 pod_ready.go:94] pod "etcd-pause-145303" is "Ready"
	I1001 18:46:37.383483   47159 pod_ready.go:86] duration metric: took 3.006058722s for pod "etcd-pause-145303" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:37.386446   47159 pod_ready.go:83] waiting for pod "kube-apiserver-pause-145303" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:37.391363   47159 pod_ready.go:94] pod "kube-apiserver-pause-145303" is "Ready"
	I1001 18:46:37.391391   47159 pod_ready.go:86] duration metric: took 4.917562ms for pod "kube-apiserver-pause-145303" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:37.393661   47159 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-145303" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:37.397681   47159 pod_ready.go:94] pod "kube-controller-manager-pause-145303" is "Ready"
	I1001 18:46:37.397709   47159 pod_ready.go:86] duration metric: took 4.013092ms for pod "kube-controller-manager-pause-145303" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:37.399695   47159 pod_ready.go:83] waiting for pod "kube-proxy-wh8vc" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:37.582103   47159 pod_ready.go:94] pod "kube-proxy-wh8vc" is "Ready"
	I1001 18:46:37.582142   47159 pod_ready.go:86] duration metric: took 182.423756ms for pod "kube-proxy-wh8vc" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:37.782055   47159 pod_ready.go:83] waiting for pod "kube-scheduler-pause-145303" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:38.182540   47159 pod_ready.go:94] pod "kube-scheduler-pause-145303" is "Ready"
	I1001 18:46:38.182572   47159 pod_ready.go:86] duration metric: took 400.488924ms for pod "kube-scheduler-pause-145303" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:46:38.182587   47159 pod_ready.go:40] duration metric: took 9.321131034s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
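	Note on the pod_ready loop above: it is the extra readiness gate applied after the restart, waiting per label for each control-plane pod to become Ready or to be gone. A hedged kubectl equivalent, assuming the kubeconfig context carries the profile name pause-145303 as stated further below (timeout chosen arbitrarily):
	    for sel in component=etcd component=kube-apiserver component=kube-controller-manager \
	               k8s-app=kube-proxy component=kube-scheduler k8s-app=kube-dns; do
	      kubectl --context pause-145303 -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=60s
	    done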
	I1001 18:46:38.233963   47159 start.go:620] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1001 18:46:38.234582   47871 out.go:179] * Done! minikube is ready without Kubernetes!
	I1001 18:46:38.235338   47159 out.go:179] * Done! kubectl is now configured to use "pause-145303" cluster and "default" namespace by default
	I1001 18:46:38.239013   47871 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
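	Note: the suggestions in the box map onto ordinary minikube subcommands. A hedged usage sketch for this without-Kubernetes run (the -p flag, the profile placeholder, and the exact image-build arguments are assumptions, since they are not shown in the excerpt):
	    minikube -p <profile> ssh                        # open a shell on the node
	    eval "$(minikube -p <profile> podman-env)"       # point podman-cli at the node's podman
	    minikube -p <profile> image build -t demo .      # build an image inside minikube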
	E1001 18:46:38.247551   47871 logFile.go:53] failed to close the audit log: invalid argument
	W1001 18:46:38.247575   47871 root.go:91] failed to log command end to audit: failed to convert logs to rows: failed to unmarshal "{\"specversion\":\"1.0\",\"id\":\"b09d8796-172e-4424-8a77-fde9a397970f\",\"source\":\"https://minikube.sigs.k8s.io/\",\"type\":\"io.k8s.sigs.minikube.audit\",\"datacontenttype\":\"application/json\",\"data\":{\"args\":\"multinode-388877 cp testdata/cp-test.txt multinode-388877-m03:/home/docker/cp-test.txt\",\"command\":\"cp\",\"endTime\":\"01 Oct 25 18:28 UTC\",\"id\":\"ed9f8409-0a03-494b-a081-baeadfe2207a\",\"profile\":\"multinode-388877\",\"": unexpected end of JSON input
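	Note on the audit warning above: the quoted event record is cut off mid-string, so decoding stops with "unexpected end of JSON input". A tiny illustration of the same failure mode with a deliberately truncated document (jq here is only a stand-in parser, not part of minikube):
	    echo '{"data":{"args":"multinode-388877 cp' | jq .   # parser reports an error at end of input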
	
	
	==> CRI-O <==
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.037205791Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344399037184302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c414758-7145-4834-9444-7ac2c2588190 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.038248451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ce0cde5-35ac-45d5-8762-94a39964c63d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.038508549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ce0cde5-35ac-45d5-8762-94a39964c63d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.039272077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ad41df0fc5c7fc6de6b3d3b2b50c8ac14e1d5cb5b13605f2e764dbabcaa384b,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb8986dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759344387092155655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4df559882bb9b5f729b5ea7a4903e797bb5aa6123724800b797489956dce569,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759344387122369069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d53babc9f40cae5475929ca20dccc5e233ddcad92caefea5e3ad4d0ac9ba22,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759344383188647423,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd2214b9ae796fde65328eca5376d39ef42c4d09a72f722970043c0798346b3,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CON
TAINER_RUNNING,CreatedAt:1759344383168766165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ee5f9edf306e883a32b0d8bcfd138521f67a55b38ab5c52e3327d1dd0844e1,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759344383137787136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288990005b545f3a111a56b1c55590e2feea823f94275559258de79f7c943481,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759344379213414067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb89
86dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759344358264560882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759344357400752854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759344357328510675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759344357251114460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759344357216396321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-14
5303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759344357148228618,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ce0cde5-35ac-45d5-8762-94a39964c63d name=/runtime.v1.RuntimeService/ListContainers
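	Note: the ListContainersResponse above (and the near-identical ones that follow, from repeated polls a few tens of milliseconds apart) lists each control-plane container twice — the Attempt:2 instances currently CONTAINER_RUNNING and the Attempt:1 instances left CONTAINER_EXITED from before the restart. A hedged sketch of pulling the same view manually with crictl (standard crictl flags, not taken from this run):
	    sudo crictl ps -a              # running and exited containers with attempt numbers
	    sudo crictl ps -a -o json      # full metadata, comparable to the response payload above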
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.090007442Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8122ca5-f4bc-4769-b53e-c2742b73fd79 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.090105310Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8122ca5-f4bc-4769-b53e-c2742b73fd79 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.091749201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d50af3d-9980-4911-9c0e-28b470b9f643 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.092698110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344399092672807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d50af3d-9980-4911-9c0e-28b470b9f643 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.093239330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c593c85-9c41-4253-a772-45e5f2305b0f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.093334239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c593c85-9c41-4253-a772-45e5f2305b0f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.093682526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ad41df0fc5c7fc6de6b3d3b2b50c8ac14e1d5cb5b13605f2e764dbabcaa384b,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb8986dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759344387092155655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4df559882bb9b5f729b5ea7a4903e797bb5aa6123724800b797489956dce569,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759344387122369069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d53babc9f40cae5475929ca20dccc5e233ddcad92caefea5e3ad4d0ac9ba22,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759344383188647423,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd2214b9ae796fde65328eca5376d39ef42c4d09a72f722970043c0798346b3,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CON
TAINER_RUNNING,CreatedAt:1759344383168766165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ee5f9edf306e883a32b0d8bcfd138521f67a55b38ab5c52e3327d1dd0844e1,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759344383137787136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288990005b545f3a111a56b1c55590e2feea823f94275559258de79f7c943481,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759344379213414067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb89
86dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759344358264560882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759344357400752854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759344357328510675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759344357251114460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759344357216396321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-14
5303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759344357148228618,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c593c85-9c41-4253-a772-45e5f2305b0f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.157406654Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38667311-d6b6-4615-8ae3-0db1c8d925d4 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.157497652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38667311-d6b6-4615-8ae3-0db1c8d925d4 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.158799706Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=941165f4-f16a-400d-a78b-a4bf6efec50d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.159442716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344399159410442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=941165f4-f16a-400d-a78b-a4bf6efec50d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.160228261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf771655-e46a-4b81-891b-ac70a700db66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.160327842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf771655-e46a-4b81-891b-ac70a700db66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.160795849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ad41df0fc5c7fc6de6b3d3b2b50c8ac14e1d5cb5b13605f2e764dbabcaa384b,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb8986dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759344387092155655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4df559882bb9b5f729b5ea7a4903e797bb5aa6123724800b797489956dce569,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759344387122369069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d53babc9f40cae5475929ca20dccc5e233ddcad92caefea5e3ad4d0ac9ba22,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759344383188647423,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd2214b9ae796fde65328eca5376d39ef42c4d09a72f722970043c0798346b3,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CON
TAINER_RUNNING,CreatedAt:1759344383168766165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ee5f9edf306e883a32b0d8bcfd138521f67a55b38ab5c52e3327d1dd0844e1,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759344383137787136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288990005b545f3a111a56b1c55590e2feea823f94275559258de79f7c943481,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759344379213414067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb89
86dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759344358264560882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759344357400752854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759344357328510675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759344357251114460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759344357216396321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-14
5303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759344357148228618,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf771655-e46a-4b81-891b-ac70a700db66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.221773623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5b5064b-c5eb-4a18-80ef-884a519bf075 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.221918618Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5b5064b-c5eb-4a18-80ef-884a519bf075 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.223969404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d55c6f7-d3ab-4b47-b56c-7031ed278fa1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.224877736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344399224815263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d55c6f7-d3ab-4b47-b56c-7031ed278fa1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.225742900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b274db0-48ee-4fb4-baed-a1b9438053c2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.225892934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b274db0-48ee-4fb4-baed-a1b9438053c2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:39 pause-145303 crio[2560]: time="2025-10-01 18:46:39.226435691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ad41df0fc5c7fc6de6b3d3b2b50c8ac14e1d5cb5b13605f2e764dbabcaa384b,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb8986dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759344387092155655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4df559882bb9b5f729b5ea7a4903e797bb5aa6123724800b797489956dce569,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759344387122369069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d53babc9f40cae5475929ca20dccc5e233ddcad92caefea5e3ad4d0ac9ba22,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759344383188647423,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd2214b9ae796fde65328eca5376d39ef42c4d09a72f722970043c0798346b3,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CON
TAINER_RUNNING,CreatedAt:1759344383168766165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ee5f9edf306e883a32b0d8bcfd138521f67a55b38ab5c52e3327d1dd0844e1,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759344383137787136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288990005b545f3a111a56b1c55590e2feea823f94275559258de79f7c943481,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759344379213414067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb89
86dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759344358264560882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759344357400752854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759344357328510675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759344357251114460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759344357216396321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-14
5303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759344357148228618,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b274db0-48ee-4fb4-baed-a1b9438053c2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a4df559882bb9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   12 seconds ago      Running             kube-proxy                2                   91a76775c46a3       kube-proxy-wh8vc
	0ad41df0fc5c7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   2                   5e80eb4ff9530       coredns-66bc5c9577-d67rw
	00d53babc9f40       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   16 seconds ago      Running             kube-apiserver            2                   c454214201261       kube-apiserver-pause-145303
	dcd2214b9ae79       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   16 seconds ago      Running             etcd                      2                   44bd58df1c52b       etcd-pause-145303
	54ee5f9edf306       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   16 seconds ago      Running             kube-scheduler            2                   428515ebfeb7d       kube-scheduler-pause-145303
	288990005b545       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   20 seconds ago      Running             kube-controller-manager   2                   aa49b1affc3e9       kube-controller-manager-pause-145303
	7e9238f55b3f9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   41 seconds ago      Exited              coredns                   1                   5e80eb4ff9530       coredns-66bc5c9577-d67rw
	51448fb688338       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   41 seconds ago      Exited              kube-proxy                1                   91a76775c46a3       kube-proxy-wh8vc
	18b4c0db76fd8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   41 seconds ago      Exited              kube-scheduler            1                   428515ebfeb7d       kube-scheduler-pause-145303
	1fc9354609697       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   42 seconds ago      Exited              kube-controller-manager   1                   aa49b1affc3e9       kube-controller-manager-pause-145303
	c2a573a84aa07       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   42 seconds ago      Exited              kube-apiserver            1                   c454214201261       kube-apiserver-pause-145303
	383b445295ebf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   42 seconds ago      Exited              etcd                      1                   44bd58df1c52b       etcd-pause-145303
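	For comparison while reading the table above, an equivalent container listing can usually be regenerated on the node itself (a sketch, assuming the pause-145303 profile is still running and crictl is available in the guest image, as it is in standard minikube images):
	
	  out/minikube-linux-amd64 -p pause-145303 ssh "sudo crictl ps -a"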
	
	
	==> coredns [0ad41df0fc5c7fc6de6b3d3b2b50c8ac14e1d5cb5b13605f2e764dbabcaa384b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33352 - 19697 "HINFO IN 8657941590688261134.4338020412912103796. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.162680004s
	
	
	==> coredns [7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:33119 - 59158 "HINFO IN 1206841871795915647.272757696560402739. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.115590619s
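	The exited CoreDNS attempt shown above can also be pulled straight from the cluster rather than from the crio log (a sketch, assuming the pause-145303 kubeconfig context and the coredns-66bc5c9577-d67rw pod still exist; --previous selects the terminated container instance):
	
	  kubectl --context pause-145303 -n kube-system logs coredns-66bc5c9577-d67rw --previous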
	
	
	==> describe nodes <==
	Name:               pause-145303
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-145303
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de12e0f54d226aca16c1f78311795f5ec99c1492
	                    minikube.k8s.io/name=pause-145303
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_01T18_44_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Oct 2025 18:44:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-145303
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Oct 2025 18:46:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Oct 2025 18:46:27 +0000   Wed, 01 Oct 2025 18:44:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Oct 2025 18:46:27 +0000   Wed, 01 Oct 2025 18:44:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Oct 2025 18:46:27 +0000   Wed, 01 Oct 2025 18:44:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Oct 2025 18:46:27 +0000   Wed, 01 Oct 2025 18:44:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    pause-145303
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 43501cd7ff6b4745a769dcb0ca4ca74a
	  System UUID:                43501cd7-ff6b-4745-a769-dcb0ca4ca74a
	  Boot ID:                    16c06935-5dff-4035-a28a-12e1ef3d5586
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-d67rw                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     110s
	  kube-system                 etcd-pause-145303                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         117s
	  kube-system                 kube-apiserver-pause-145303             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-pause-145303    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-wh8vc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-pause-145303             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  Starting                 11s                  kube-proxy       
	  Normal  Starting                 37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node pause-145303 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node pause-145303 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node pause-145303 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node pause-145303 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node pause-145303 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node pause-145303 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeReady                114s                 kubelet          Node pause-145303 status is now: NodeReady
	  Normal  RegisteredNode           111s                 node-controller  Node pause-145303 event: Registered Node pause-145303 in Controller
	  Normal  RegisteredNode           34s                  node-controller  Node pause-145303 event: Registered Node pause-145303 in Controller
	  Normal  NodeHasNoDiskPressure    17s (x8 over 17s)    kubelet          Node pause-145303 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  17s (x8 over 17s)    kubelet          Node pause-145303 status is now: NodeHasSufficientMemory
	  Normal  Starting                 17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     17s (x7 over 17s)    kubelet          Node pause-145303 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                  node-controller  Node pause-145303 event: Registered Node pause-145303 in Controller
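	The node description above corresponds to a plain kubectl query and can be refreshed at any point during triage (a sketch, assuming the pause-145303 context is still configured):
	
	  kubectl --context pause-145303 describe node pause-145303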
	
	
	==> dmesg <==
	[Oct 1 18:44] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001565] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000110] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.182382] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082872] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.109825] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.103580] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.132126] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000104] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 1 18:45] kauditd_printk_skb: 190 callbacks suppressed
	[Oct 1 18:46] kauditd_printk_skb: 297 callbacks suppressed
	[  +3.233597] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.548806] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.070619] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.400719] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6] <==
	{"level":"warn","ts":"2025-10-01T18:46:00.711669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:46:00.734663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:46:00.748550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:46:00.769441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:46:00.786923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:46:00.860954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41802","server-name":"","error":"EOF"}
	2025/10/01 18:46:09 WARNING: [core] [Server #4]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2025-10-01T18:46:19.411801Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-01T18:46:19.412083Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-145303","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	{"level":"error","ts":"2025-10-01T18:46:19.412234Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-01T18:46:19.414354Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-01T18:46:19.414787Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:46:19.415513Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3276445ff8d31e34","current-leader-member-id":"3276445ff8d31e34"}
	{"level":"warn","ts":"2025-10-01T18:46:19.415916Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-01T18:46:19.416048Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-01T18:46:19.416217Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.100:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:46:19.416411Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-01T18:46:19.416544Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-01T18:46:19.417227Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-01T18:46:19.417351Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-01T18:46:19.417470Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:46:19.425260Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"error","ts":"2025-10-01T18:46:19.427111Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.100:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:46:19.427411Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2025-10-01T18:46:19.427518Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-145303","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> etcd [dcd2214b9ae796fde65328eca5376d39ef42c4d09a72f722970043c0798346b3] <==
	{"level":"warn","ts":"2025-10-01T18:46:26.632333Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.093195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-10-01T18:46:26.632383Z","caller":"traceutil/trace.go:172","msg":"trace[703385182] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:536; }","duration":"158.16095ms","start":"2025-10-01T18:46:26.474214Z","end":"2025-10-01T18:46:26.632375Z","steps":["trace[703385182] 'agreement among raft nodes before linearized reading'  (duration: 158.030266ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:46:26.886198Z","caller":"traceutil/trace.go:172","msg":"trace[1095208122] linearizableReadLoop","detail":"{readStateIndex:573; appliedIndex:573; }","duration":"254.021127ms","start":"2025-10-01T18:46:26.632161Z","end":"2025-10-01T18:46:26.886182Z","steps":["trace[1095208122] 'read index received'  (duration: 254.016018ms)","trace[1095208122] 'applied index is now lower than readState.Index'  (duration: 4.372µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-01T18:46:26.887332Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"413.064098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-10-01T18:46:26.887388Z","caller":"traceutil/trace.go:172","msg":"trace[1985169609] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:536; }","duration":"413.136334ms","start":"2025-10-01T18:46:26.474243Z","end":"2025-10-01T18:46:26.887379Z","steps":["trace[1985169609] 'agreement among raft nodes before linearized reading'  (duration: 412.010906ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:26.887431Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T18:46:26.474238Z","time spent":"413.185185ms","remote":"127.0.0.1:36032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":201,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 "}
	{"level":"warn","ts":"2025-10-01T18:46:26.887654Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"398.533503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-01T18:46:26.887746Z","caller":"traceutil/trace.go:172","msg":"trace[2124331021] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:536; }","duration":"398.67259ms","start":"2025-10-01T18:46:26.489062Z","end":"2025-10-01T18:46:26.887735Z","steps":["trace[2124331021] 'agreement among raft nodes before linearized reading'  (duration: 397.18084ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:26.887899Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T18:46:26.489051Z","time spent":"398.720504ms","remote":"127.0.0.1:35886","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":27,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2025-10-01T18:46:26.887912Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.116434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-wh8vc\" limit:1 ","response":"range_response_count:1 size:5389"}
	{"level":"info","ts":"2025-10-01T18:46:26.887935Z","caller":"traceutil/trace.go:172","msg":"trace[614680544] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-wh8vc; range_end:; response_count:1; response_revision:538; }","duration":"254.141422ms","start":"2025-10-01T18:46:26.633787Z","end":"2025-10-01T18:46:26.887929Z","steps":["trace[614680544] 'agreement among raft nodes before linearized reading'  (duration: 254.021591ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:46:26.888161Z","caller":"traceutil/trace.go:172","msg":"trace[1295107224] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"407.08459ms","start":"2025-10-01T18:46:26.481069Z","end":"2025-10-01T18:46:26.888153Z","steps":["trace[1295107224] 'process raft request'  (duration: 405.16838ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:26.888414Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.490521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-d67rw\" limit:1 ","response":"range_response_count:1 size:5844"}
	{"level":"info","ts":"2025-10-01T18:46:26.888441Z","caller":"traceutil/trace.go:172","msg":"trace[378514300] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-d67rw; range_end:; response_count:1; response_revision:538; }","duration":"251.522669ms","start":"2025-10-01T18:46:26.636911Z","end":"2025-10-01T18:46:26.888434Z","steps":["trace[378514300] 'agreement among raft nodes before linearized reading'  (duration: 251.434374ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:46:26.888575Z","caller":"traceutil/trace.go:172","msg":"trace[359360957] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"375.046548ms","start":"2025-10-01T18:46:26.513520Z","end":"2025-10-01T18:46:26.888566Z","steps":["trace[359360957] 'process raft request'  (duration: 374.228467ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:26.888641Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T18:46:26.481052Z","time spent":"407.143892ms","remote":"127.0.0.1:36134","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-145303\" mod_revision:449 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-145303\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-145303\" > >"}
	{"level":"warn","ts":"2025-10-01T18:46:26.888676Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T18:46:26.513503Z","time spent":"375.134794ms","remote":"127.0.0.1:36134","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-7zq4xwlb5bn3fbhss5o2p32ec4\" mod_revision:446 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-7zq4xwlb5bn3fbhss5o2p32ec4\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-7zq4xwlb5bn3fbhss5o2p32ec4\" > >"}
	{"level":"info","ts":"2025-10-01T18:46:26.888738Z","caller":"traceutil/trace.go:172","msg":"trace[1764461780] transaction","detail":"{read_only:false; number_of_response:0; response_revision:538; }","duration":"353.203654ms","start":"2025-10-01T18:46:26.535529Z","end":"2025-10-01T18:46:26.888732Z","steps":["trace[1764461780] 'process raft request'  (duration: 352.259753ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:26.888764Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T18:46:26.535510Z","time spent":"353.240023ms","remote":"127.0.0.1:35986","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/minions/pause-145303\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-145303\" value_size:3846 >> failure:<>"}
	{"level":"info","ts":"2025-10-01T18:46:27.034660Z","caller":"traceutil/trace.go:172","msg":"trace[1774351475] linearizableReadLoop","detail":"{readStateIndex:576; appliedIndex:576; }","duration":"132.393886ms","start":"2025-10-01T18:46:26.902242Z","end":"2025-10-01T18:46:27.034636Z","steps":["trace[1774351475] 'read index received'  (duration: 132.386932ms)","trace[1774351475] 'applied index is now lower than readState.Index'  (duration: 6.015µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-01T18:46:27.035133Z","caller":"traceutil/trace.go:172","msg":"trace[774360137] transaction","detail":"{read_only:false; number_of_response:0; response_revision:538; }","duration":"133.371803ms","start":"2025-10-01T18:46:26.901747Z","end":"2025-10-01T18:46:27.035118Z","steps":["trace[774360137] 'process raft request'  (duration: 133.010746ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:27.035525Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.26241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-145303\" limit:1 ","response":"range_response_count:1 size:5280"}
	{"level":"info","ts":"2025-10-01T18:46:27.036979Z","caller":"traceutil/trace.go:172","msg":"trace[230917304] range","detail":"{range_begin:/registry/minions/pause-145303; range_end:; response_count:1; response_revision:538; }","duration":"134.724358ms","start":"2025-10-01T18:46:26.902239Z","end":"2025-10-01T18:46:27.036963Z","steps":["trace[230917304] 'agreement among raft nodes before linearized reading'  (duration: 132.472392ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:46:27.040101Z","caller":"traceutil/trace.go:172","msg":"trace[2020945473] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"130.790182ms","start":"2025-10-01T18:46:26.909299Z","end":"2025-10-01T18:46:27.040089Z","steps":["trace[2020945473] 'process raft request'  (duration: 130.566471ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:46:27.040392Z","caller":"traceutil/trace.go:172","msg":"trace[1554902488] transaction","detail":"{read_only:false; number_of_response:0; response_revision:538; }","duration":"138.400395ms","start":"2025-10-01T18:46:26.901978Z","end":"2025-10-01T18:46:27.040379Z","steps":["trace[1554902488] 'process raft request'  (duration: 137.471629ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:46:39 up 2 min,  0 users,  load average: 0.94, 0.43, 0.16
	Linux pause-145303 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [00d53babc9f40cae5475929ca20dccc5e233ddcad92caefea5e3ad4d0ac9ba22] <==
	I1001 18:46:26.389322       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 18:46:26.402927       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1001 18:46:26.410794       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1001 18:46:26.410939       1 aggregator.go:171] initial CRD sync complete...
	I1001 18:46:26.410988       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 18:46:26.411019       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 18:46:26.411039       1 cache.go:39] Caches are synced for autoregister controller
	I1001 18:46:26.425537       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1001 18:46:26.430326       1 policy_source.go:240] refreshing policies
	I1001 18:46:26.468453       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 18:46:26.473657       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1001 18:46:26.474804       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1001 18:46:26.475053       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1001 18:46:26.475994       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1001 18:46:26.480569       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 18:46:26.633100       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1001 18:46:27.320799       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1001 18:46:27.824793       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100]
	I1001 18:46:27.827018       1 controller.go:667] quota admission added evaluator for: endpoints
	I1001 18:46:27.837515       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 18:46:28.256039       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1001 18:46:28.298771       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1001 18:46:28.336589       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 18:46:28.345295       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 18:46:33.969518       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882] <==
	I1001 18:46:09.268960       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1001 18:46:09.268981       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I1001 18:46:09.268996       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1001 18:46:09.269009       1 controller.go:132] Ending legacy_token_tracking_controller
	I1001 18:46:09.269013       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1001 18:46:09.269405       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1001 18:46:09.269461       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I1001 18:46:09.269475       1 establishing_controller.go:92] Shutting down EstablishingController
	I1001 18:46:09.269502       1 naming_controller.go:310] Shutting down NamingConditionController
	I1001 18:46:09.269516       1 controller.go:120] Shutting down OpenAPI V3 controller
	I1001 18:46:09.269537       1 controller.go:170] Shutting down OpenAPI controller
	I1001 18:46:09.269549       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I1001 18:46:09.269758       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1001 18:46:09.271975       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 18:46:09.272106       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 18:46:09.272595       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1001 18:46:09.272618       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1001 18:46:09.272649       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 18:46:09.272878       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1001 18:46:09.272957       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1001 18:46:09.273077       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I1001 18:46:09.273162       1 secure_serving.go:259] Stopped listening on [::]:8443
	I1001 18:46:09.273188       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1001 18:46:09.273403       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1001 18:46:09.273594       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a] <==
	I1001 18:46:05.194657       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1001 18:46:05.194663       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1001 18:46:05.195902       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1001 18:46:05.196003       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1001 18:46:05.199170       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1001 18:46:05.203518       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1001 18:46:05.210882       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1001 18:46:05.210891       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1001 18:46:05.214289       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1001 18:46:05.214355       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1001 18:46:05.215651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1001 18:46:05.215718       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1001 18:46:05.216766       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1001 18:46:05.217946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1001 18:46:05.220419       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1001 18:46:05.222800       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1001 18:46:05.222965       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1001 18:46:05.223077       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-145303"
	I1001 18:46:05.223128       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1001 18:46:05.224285       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1001 18:46:05.226741       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1001 18:46:05.226882       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1001 18:46:05.227700       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1001 18:46:05.231511       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1001 18:46:05.232805       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [288990005b545f3a111a56b1c55590e2feea823f94275559258de79f7c943481] <==
	I1001 18:46:29.816311       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1001 18:46:29.817941       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1001 18:46:29.817952       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1001 18:46:29.817962       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1001 18:46:29.820711       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1001 18:46:29.820721       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1001 18:46:29.824032       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1001 18:46:29.828765       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1001 18:46:29.835452       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1001 18:46:29.837732       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1001 18:46:29.839069       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1001 18:46:29.851973       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1001 18:46:29.852050       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1001 18:46:29.852076       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1001 18:46:29.855233       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1001 18:46:29.855298       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1001 18:46:29.855360       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1001 18:46:29.855466       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1001 18:46:29.855582       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1001 18:46:29.855610       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1001 18:46:29.855706       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-145303"
	I1001 18:46:29.855936       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1001 18:46:29.855601       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1001 18:46:29.855786       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1001 18:46:29.865303       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea] <==
	I1001 18:45:58.825878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1001 18:46:01.927607       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1001 18:46:01.927738       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.100"]
	E1001 18:46:01.928050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 18:46:01.996077       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1001 18:46:01.997007       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 18:46:01.997106       1 server_linux.go:132] "Using iptables Proxier"
	I1001 18:46:02.204036       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 18:46:02.204663       1 server.go:527] "Version info" version="v1.34.1"
	I1001 18:46:02.204760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:46:02.211311       1 config.go:106] "Starting endpoint slice config controller"
	I1001 18:46:02.211488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1001 18:46:02.212115       1 config.go:200] "Starting service config controller"
	I1001 18:46:02.212187       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1001 18:46:02.220928       1 config.go:403] "Starting serviceCIDR config controller"
	I1001 18:46:02.220955       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1001 18:46:02.228696       1 config.go:309] "Starting node config controller"
	I1001 18:46:02.228724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1001 18:46:02.228874       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1001 18:46:02.312279       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1001 18:46:02.312367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1001 18:46:02.321709       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [a4df559882bb9b5f729b5ea7a4903e797bb5aa6123724800b797489956dce569] <==
	I1001 18:46:27.442637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1001 18:46:27.543469       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1001 18:46:27.543526       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.100"]
	E1001 18:46:27.543596       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 18:46:27.616265       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1001 18:46:27.616384       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 18:46:27.616511       1 server_linux.go:132] "Using iptables Proxier"
	I1001 18:46:27.640690       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 18:46:27.641103       1 server.go:527] "Version info" version="v1.34.1"
	I1001 18:46:27.641117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:46:27.647087       1 config.go:200] "Starting service config controller"
	I1001 18:46:27.647109       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1001 18:46:27.647136       1 config.go:106] "Starting endpoint slice config controller"
	I1001 18:46:27.647143       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1001 18:46:27.647159       1 config.go:403] "Starting serviceCIDR config controller"
	I1001 18:46:27.650112       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1001 18:46:27.650329       1 config.go:309] "Starting node config controller"
	I1001 18:46:27.650351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1001 18:46:27.650358       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1001 18:46:27.747752       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1001 18:46:27.747888       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1001 18:46:27.750421       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6] <==
	I1001 18:45:59.928910       1 serving.go:386] Generated self-signed cert in-memory
	I1001 18:46:02.914039       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1001 18:46:02.914191       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:46:02.924366       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1001 18:46:02.924376       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1001 18:46:02.924481       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1001 18:46:02.924529       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 18:46:02.924543       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:02.924555       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:02.924571       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:46:02.924580       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:46:03.025124       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:46:03.025199       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1001 18:46:03.025390       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:08.926099       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1001 18:46:08.926127       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1001 18:46:08.926146       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1001 18:46:08.926168       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:08.926743       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1001 18:46:08.926949       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:46:08.927281       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1001 18:46:08.927376       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [54ee5f9edf306e883a32b0d8bcfd138521f67a55b38ab5c52e3327d1dd0844e1] <==
	I1001 18:46:24.204504       1 serving.go:386] Generated self-signed cert in-memory
	W1001 18:46:26.330087       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 18:46:26.330190       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 18:46:26.330233       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 18:46:26.330255       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 18:46:26.416758       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1001 18:46:26.418623       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:46:26.421169       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:26.421237       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:26.421526       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1001 18:46:26.421601       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 18:46:26.522142       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 01 18:46:25 pause-145303 kubelet[3747]: E1001 18:46:25.667232    3747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145303\" not found" node="pause-145303"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: E1001 18:46:26.013129    3747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145303\" not found" node="pause-145303"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.353995    3747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-145303"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.421604    3747 apiserver.go:52] "Watching apiserver"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.458110    3747 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.470717    3747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd499d04-e196-4e19-ad92-5e1a46fc3d51-xtables-lock\") pod \"kube-proxy-wh8vc\" (UID: \"cd499d04-e196-4e19-ad92-5e1a46fc3d51\") " pod="kube-system/kube-proxy-wh8vc"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.471001    3747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd499d04-e196-4e19-ad92-5e1a46fc3d51-lib-modules\") pod \"kube-proxy-wh8vc\" (UID: \"cd499d04-e196-4e19-ad92-5e1a46fc3d51\") " pod="kube-system/kube-proxy-wh8vc"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.668938    3747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.035924    3747 scope.go:117] "RemoveContainer" containerID="51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.036719    3747 scope.go:117] "RemoveContainer" containerID="7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: E1001 18:46:27.044294    3747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-145303\" already exists" pod="kube-system/etcd-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.044314    3747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.051915    3747 kubelet_node_status.go:124] "Node was previously registered" node="pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.054562    3747 kubelet_node_status.go:78] "Successfully registered node" node="pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.054620    3747 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: E1001 18:46:27.055372    3747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-145303\" already exists" pod="kube-system/kube-scheduler-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.060351    3747 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: E1001 18:46:27.072511    3747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-145303\" already exists" pod="kube-system/kube-apiserver-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.072617    3747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: E1001 18:46:27.106053    3747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-145303\" already exists" pod="kube-system/kube-controller-manager-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.106158    3747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: E1001 18:46:27.169370    3747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-145303\" already exists" pod="kube-system/kube-scheduler-pause-145303"
	Oct 01 18:46:32 pause-145303 kubelet[3747]: E1001 18:46:32.625357    3747 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759344392624901865  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 01 18:46:32 pause-145303 kubelet[3747]: E1001 18:46:32.625399    3747 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759344392624901865  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 01 18:46:33 pause-145303 kubelet[3747]: I1001 18:46:33.931182    3747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-145303 -n pause-145303
helpers_test.go:269: (dbg) Run:  kubectl --context pause-145303 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-145303 -n pause-145303
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-145303 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-145303 logs -n 25: (1.378336432s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-899156 --schedule 15s                                                                                                                            │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │                     │
	│ stop    │ -p scheduled-stop-899156 --cancel-scheduled                                                                                                                        │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │ 01 Oct 25 18:42 UTC │
	│ stop    │ -p scheduled-stop-899156 --schedule 15s                                                                                                                            │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │                     │
	│ stop    │ -p scheduled-stop-899156 --schedule 15s                                                                                                                            │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │                     │
	│ stop    │ -p scheduled-stop-899156 --schedule 15s                                                                                                                            │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ delete  │ -p scheduled-stop-899156                                                                                                                                           │ scheduled-stop-899156     │ jenkins │ v1.37.0 │ 01 Oct 25 18:44 UTC │ 01 Oct 25 18:44 UTC │
	│ start   │ -p NoKubernetes-180525 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                            │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:44 UTC │                     │
	│ start   │ -p pause-145303 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-145303              │ jenkins │ v1.37.0 │ 01 Oct 25 18:44 UTC │ 01 Oct 25 18:45 UTC │
	│ start   │ -p offline-crio-136397 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ offline-crio-136397       │ jenkins │ v1.37.0 │ 01 Oct 25 18:44 UTC │ 01 Oct 25 18:45 UTC │
	│ start   │ -p NoKubernetes-180525 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:44 UTC │ 01 Oct 25 18:45 UTC │
	│ start   │ -p stopped-upgrade-149070 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-149070    │ jenkins │ v1.32.0 │ 01 Oct 25 18:44 UTC │ 01 Oct 25 18:45 UTC │
	│ delete  │ -p offline-crio-136397                                                                                                                                             │ offline-crio-136397       │ jenkins │ v1.37.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:45 UTC │
	│ start   │ -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-130620 │ jenkins │ v1.37.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:46 UTC │
	│ start   │ -p pause-145303 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-145303              │ jenkins │ v1.37.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:46 UTC │
	│ start   │ -p NoKubernetes-180525 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:46 UTC │
	│ stop    │ stopped-upgrade-149070 stop                                                                                                                                        │ stopped-upgrade-149070    │ jenkins │ v1.32.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:45 UTC │
	│ start   │ -p stopped-upgrade-149070 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-149070    │ jenkins │ v1.37.0 │ 01 Oct 25 18:45 UTC │ 01 Oct 25 18:46 UTC │
	│ delete  │ -p NoKubernetes-180525                                                                                                                                             │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │ 01 Oct 25 18:46 UTC │
	│ start   │ -p NoKubernetes-180525 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-130620                                                                                                                                       │ kubernetes-upgrade-130620 │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │ 01 Oct 25 18:46 UTC │
	│ start   │ -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-130620 │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-149070 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-149070    │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │                     │
	│ delete  │ -p stopped-upgrade-149070                                                                                                                                          │ stopped-upgrade-149070    │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │ 01 Oct 25 18:46 UTC │
	│ ssh     │ -p NoKubernetes-180525 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-180525       │ jenkins │ v1.37.0 │ 01 Oct 25 18:46 UTC │                     │
	│ start   │ -p running-upgrade-857786 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ running-upgrade-857786    │ jenkins │ v1.32.0 │ 01 Oct 25 18:46 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 18:46:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:46:40.917399   48832 out.go:296] Setting OutFile to fd 1 ...
	I1001 18:46:40.917788   48832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1001 18:46:40.917794   48832 out.go:309] Setting ErrFile to fd 2...
	I1001 18:46:40.917801   48832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1001 18:46:40.918138   48832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 18:46:40.918862   48832 out.go:303] Setting JSON to false
	I1001 18:46:40.919924   48832 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5345,"bootTime":1759339056,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 18:46:40.919981   48832 start.go:138] virtualization: kvm guest
	I1001 18:46:40.922149   48832 out.go:177] * [running-upgrade-857786] minikube v1.32.0 on Ubuntu 22.04 (kvm/amd64)
	I1001 18:46:40.923380   48832 out.go:177]   - MINIKUBE_LOCATION=21631
	I1001 18:46:40.923390   48832 notify.go:220] Checking for updates...
	I1001 18:46:40.925146   48832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:46:40.926552   48832 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 18:46:40.928071   48832 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 18:46:40.929510   48832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:46:40.930745   48832 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig1725856621
	
	
	==> CRI-O <==
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.300040396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344401300007436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec7d1631-64e4-4f53-8a5c-9aa9d79d5706 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.300804478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0661a956-7be6-424c-83ef-dd7a4e5aba87 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.301133347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0661a956-7be6-424c-83ef-dd7a4e5aba87 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.301697795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ad41df0fc5c7fc6de6b3d3b2b50c8ac14e1d5cb5b13605f2e764dbabcaa384b,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb8986dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759344387092155655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4df559882bb9b5f729b5ea7a4903e797bb5aa6123724800b797489956dce569,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759344387122369069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d53babc9f40cae5475929ca20dccc5e233ddcad92caefea5e3ad4d0ac9ba22,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759344383188647423,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd2214b9ae796fde65328eca5376d39ef42c4d09a72f722970043c0798346b3,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CON
TAINER_RUNNING,CreatedAt:1759344383168766165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ee5f9edf306e883a32b0d8bcfd138521f67a55b38ab5c52e3327d1dd0844e1,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759344383137787136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288990005b545f3a111a56b1c55590e2feea823f94275559258de79f7c943481,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759344379213414067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb89
86dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759344358264560882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759344357400752854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759344357328510675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759344357251114460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759344357216396321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-14
5303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759344357148228618,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0661a956-7be6-424c-83ef-dd7a4e5aba87 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.348892019Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85f33ca3-a29e-4e54-88dc-f8695dfdda09 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.348984462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85f33ca3-a29e-4e54-88dc-f8695dfdda09 name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.350652395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfc1fec6-2cc3-4afa-831f-2c4251acd281 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.351212097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344401351185154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfc1fec6-2cc3-4afa-831f-2c4251acd281 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.351812570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c34eac43-695b-41a4-9333-8bd440c773f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.351916478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c34eac43-695b-41a4-9333-8bd440c773f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.352148487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ad41df0fc5c7fc6de6b3d3b2b50c8ac14e1d5cb5b13605f2e764dbabcaa384b,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb8986dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759344387092155655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4df559882bb9b5f729b5ea7a4903e797bb5aa6123724800b797489956dce569,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759344387122369069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d53babc9f40cae5475929ca20dccc5e233ddcad92caefea5e3ad4d0ac9ba22,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759344383188647423,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd2214b9ae796fde65328eca5376d39ef42c4d09a72f722970043c0798346b3,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CON
TAINER_RUNNING,CreatedAt:1759344383168766165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ee5f9edf306e883a32b0d8bcfd138521f67a55b38ab5c52e3327d1dd0844e1,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759344383137787136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288990005b545f3a111a56b1c55590e2feea823f94275559258de79f7c943481,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759344379213414067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb89
86dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759344358264560882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759344357400752854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759344357328510675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759344357251114460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759344357216396321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-14
5303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759344357148228618,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c34eac43-695b-41a4-9333-8bd440c773f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.395375482Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b4d801a-ed59-4099-9b9f-7ead09e56bbf name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.395605637Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b4d801a-ed59-4099-9b9f-7ead09e56bbf name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.396959116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19246163-d2c4-4eef-b7f8-ced8182f202b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.397340199Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344401397317953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19246163-d2c4-4eef-b7f8-ced8182f202b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.398090357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63c9aadd-96f3-4235-abbf-8af657724349 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.398231727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63c9aadd-96f3-4235-abbf-8af657724349 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.398526449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ad41df0fc5c7fc6de6b3d3b2b50c8ac14e1d5cb5b13605f2e764dbabcaa384b,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb8986dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759344387092155655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4df559882bb9b5f729b5ea7a4903e797bb5aa6123724800b797489956dce569,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759344387122369069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d53babc9f40cae5475929ca20dccc5e233ddcad92caefea5e3ad4d0ac9ba22,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759344383188647423,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd2214b9ae796fde65328eca5376d39ef42c4d09a72f722970043c0798346b3,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CON
TAINER_RUNNING,CreatedAt:1759344383168766165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ee5f9edf306e883a32b0d8bcfd138521f67a55b38ab5c52e3327d1dd0844e1,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759344383137787136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288990005b545f3a111a56b1c55590e2feea823f94275559258de79f7c943481,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759344379213414067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb89
86dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759344358264560882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759344357400752854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759344357328510675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759344357251114460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759344357216396321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-14
5303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759344357148228618,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63c9aadd-96f3-4235-abbf-8af657724349 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.443860996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa07e0db-1f1f-4a7f-a3c3-76804af5fbda name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.443966770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa07e0db-1f1f-4a7f-a3c3-76804af5fbda name=/runtime.v1.RuntimeService/Version
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.444940574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4dfc0573-46ae-43e6-8d40-641c3d12a453 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.445304193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759344401445282746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4dfc0573-46ae-43e6-8d40-641c3d12a453 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.445939211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=399c8bef-8d54-4fd0-9f7f-9dc101be9697 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.446130600Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=399c8bef-8d54-4fd0-9f7f-9dc101be9697 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 18:46:41 pause-145303 crio[2560]: time="2025-10-01 18:46:41.446739025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ad41df0fc5c7fc6de6b3d3b2b50c8ac14e1d5cb5b13605f2e764dbabcaa384b,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb8986dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759344387092155655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4df559882bb9b5f729b5ea7a4903e797bb5aa6123724800b797489956dce569,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759344387122369069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d53babc9f40cae5475929ca20dccc5e233ddcad92caefea5e3ad4d0ac9ba22,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759344383188647423,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd2214b9ae796fde65328eca5376d39ef42c4d09a72f722970043c0798346b3,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CON
TAINER_RUNNING,CreatedAt:1759344383168766165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ee5f9edf306e883a32b0d8bcfd138521f67a55b38ab5c52e3327d1dd0844e1,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759344383137787136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:288990005b545f3a111a56b1c55590e2feea823f94275559258de79f7c943481,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759344379213414067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884,PodSandboxId:5e80eb4ff9530caa490085474cf7d4c0603f379626cb89
86dabf98a241d5d69d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759344358264560882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d67rw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff55d0a-a2d5-449b-ac2e-20c7ae9fddf3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea,PodSandboxId:91a76775c46a3eb9d771aeca82f1938b2a694ec0e52ef539d02f76305c3e1ccc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759344357400752854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh8vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd499d04-e196-4e19-ad92-5e1a46fc3d51,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6,PodSandboxId:428515ebfeb7d064f0cc0315214c3da0465b285cfe9ec502c43af951f489b4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759344357328510675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc784b970b667fba6b12fc99f644d12f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a,PodSandboxId:aa49b1affc3e917df891d4d2d765b510f87e65c24791817d97d5bc3ef457eeb1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759344357251114460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b429017690d6fef30b63ef8f7852c2e6,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882,PodSandboxId:c454214201261ce4d46f733ff1c474df3ca26ba05459f6681dfbfe346bc80379,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759344357216396321,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-14
5303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78e5c9e0146eecca40c491bf8303ca7a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6,PodSandboxId:44bd58df1c52b77d52bd2c0bd2c7a9f198c2030f09ecad275c0551f6034cd125,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759344357148228618,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145303,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22c8aa7d692880b453e8c158cfa2a75,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=399c8bef-8d54-4fd0-9f7f-9dc101be9697 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a4df559882bb9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   14 seconds ago      Running             kube-proxy                2                   91a76775c46a3       kube-proxy-wh8vc
	0ad41df0fc5c7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   2                   5e80eb4ff9530       coredns-66bc5c9577-d67rw
	00d53babc9f40       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   18 seconds ago      Running             kube-apiserver            2                   c454214201261       kube-apiserver-pause-145303
	dcd2214b9ae79       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 seconds ago      Running             etcd                      2                   44bd58df1c52b       etcd-pause-145303
	54ee5f9edf306       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   18 seconds ago      Running             kube-scheduler            2                   428515ebfeb7d       kube-scheduler-pause-145303
	288990005b545       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   22 seconds ago      Running             kube-controller-manager   2                   aa49b1affc3e9       kube-controller-manager-pause-145303
	7e9238f55b3f9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   43 seconds ago      Exited              coredns                   1                   5e80eb4ff9530       coredns-66bc5c9577-d67rw
	51448fb688338       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   44 seconds ago      Exited              kube-proxy                1                   91a76775c46a3       kube-proxy-wh8vc
	18b4c0db76fd8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   44 seconds ago      Exited              kube-scheduler            1                   428515ebfeb7d       kube-scheduler-pause-145303
	1fc9354609697       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   44 seconds ago      Exited              kube-controller-manager   1                   aa49b1affc3e9       kube-controller-manager-pause-145303
	c2a573a84aa07       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   44 seconds ago      Exited              kube-apiserver            1                   c454214201261       kube-apiserver-pause-145303
	383b445295ebf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   44 seconds ago      Exited              etcd                      1                   44bd58df1c52b       etcd-pause-145303
	
	
	==> coredns [0ad41df0fc5c7fc6de6b3d3b2b50c8ac14e1d5cb5b13605f2e764dbabcaa384b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33352 - 19697 "HINFO IN 8657941590688261134.4338020412912103796. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.162680004s
	
	
	==> coredns [7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:33119 - 59158 "HINFO IN 1206841871795915647.272757696560402739. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.115590619s
	
	
	==> describe nodes <==
	Name:               pause-145303
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-145303
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de12e0f54d226aca16c1f78311795f5ec99c1492
	                    minikube.k8s.io/name=pause-145303
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_01T18_44_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Oct 2025 18:44:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-145303
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Oct 2025 18:46:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Oct 2025 18:46:27 +0000   Wed, 01 Oct 2025 18:44:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Oct 2025 18:46:27 +0000   Wed, 01 Oct 2025 18:44:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Oct 2025 18:46:27 +0000   Wed, 01 Oct 2025 18:44:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Oct 2025 18:46:27 +0000   Wed, 01 Oct 2025 18:44:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    pause-145303
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 43501cd7ff6b4745a769dcb0ca4ca74a
	  System UUID:                43501cd7-ff6b-4745-a769-dcb0ca4ca74a
	  Boot ID:                    16c06935-5dff-4035-a28a-12e1ef3d5586
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-d67rw                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     112s
	  kube-system                 etcd-pause-145303                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         119s
	  kube-system                 kube-apiserver-pause-145303             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-pause-145303    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-wh8vc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-pause-145303             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 39s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node pause-145303 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node pause-145303 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node pause-145303 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node pause-145303 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node pause-145303 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node pause-145303 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeReady                116s                 kubelet          Node pause-145303 status is now: NodeReady
	  Normal  RegisteredNode           113s                 node-controller  Node pause-145303 event: Registered Node pause-145303 in Controller
	  Normal  RegisteredNode           36s                  node-controller  Node pause-145303 event: Registered Node pause-145303 in Controller
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)    kubelet          Node pause-145303 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)    kubelet          Node pause-145303 status is now: NodeHasSufficientMemory
	  Normal  Starting                 19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)    kubelet          Node pause-145303 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                  node-controller  Node pause-145303 event: Registered Node pause-145303 in Controller
	
	
	==> dmesg <==
	[Oct 1 18:44] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001565] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000110] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.182382] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082872] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.109825] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.103580] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.132126] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000104] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 1 18:45] kauditd_printk_skb: 190 callbacks suppressed
	[Oct 1 18:46] kauditd_printk_skb: 297 callbacks suppressed
	[  +3.233597] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.548806] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.070619] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.400719] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [383b445295ebf8d828b59348b870537b53da5568c9e94b437f4059218e8333b6] <==
	{"level":"warn","ts":"2025-10-01T18:46:00.711669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:46:00.734663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:46:00.748550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:46:00.769441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:46:00.786923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:46:00.860954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41802","server-name":"","error":"EOF"}
	2025/10/01 18:46:09 WARNING: [core] [Server #4]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2025-10-01T18:46:19.411801Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-01T18:46:19.412083Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-145303","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	{"level":"error","ts":"2025-10-01T18:46:19.412234Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-01T18:46:19.414354Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-01T18:46:19.414787Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:46:19.415513Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3276445ff8d31e34","current-leader-member-id":"3276445ff8d31e34"}
	{"level":"warn","ts":"2025-10-01T18:46:19.415916Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-01T18:46:19.416048Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-01T18:46:19.416217Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.100:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:46:19.416411Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-01T18:46:19.416544Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-01T18:46:19.417227Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-01T18:46:19.417351Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-01T18:46:19.417470Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:46:19.425260Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"error","ts":"2025-10-01T18:46:19.427111Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.100:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:46:19.427411Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2025-10-01T18:46:19.427518Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-145303","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> etcd [dcd2214b9ae796fde65328eca5376d39ef42c4d09a72f722970043c0798346b3] <==
	{"level":"warn","ts":"2025-10-01T18:46:26.632333Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.093195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-10-01T18:46:26.632383Z","caller":"traceutil/trace.go:172","msg":"trace[703385182] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:536; }","duration":"158.16095ms","start":"2025-10-01T18:46:26.474214Z","end":"2025-10-01T18:46:26.632375Z","steps":["trace[703385182] 'agreement among raft nodes before linearized reading'  (duration: 158.030266ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:46:26.886198Z","caller":"traceutil/trace.go:172","msg":"trace[1095208122] linearizableReadLoop","detail":"{readStateIndex:573; appliedIndex:573; }","duration":"254.021127ms","start":"2025-10-01T18:46:26.632161Z","end":"2025-10-01T18:46:26.886182Z","steps":["trace[1095208122] 'read index received'  (duration: 254.016018ms)","trace[1095208122] 'applied index is now lower than readState.Index'  (duration: 4.372µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-01T18:46:26.887332Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"413.064098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-10-01T18:46:26.887388Z","caller":"traceutil/trace.go:172","msg":"trace[1985169609] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:536; }","duration":"413.136334ms","start":"2025-10-01T18:46:26.474243Z","end":"2025-10-01T18:46:26.887379Z","steps":["trace[1985169609] 'agreement among raft nodes before linearized reading'  (duration: 412.010906ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:26.887431Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T18:46:26.474238Z","time spent":"413.185185ms","remote":"127.0.0.1:36032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":201,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 "}
	{"level":"warn","ts":"2025-10-01T18:46:26.887654Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"398.533503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-01T18:46:26.887746Z","caller":"traceutil/trace.go:172","msg":"trace[2124331021] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:536; }","duration":"398.67259ms","start":"2025-10-01T18:46:26.489062Z","end":"2025-10-01T18:46:26.887735Z","steps":["trace[2124331021] 'agreement among raft nodes before linearized reading'  (duration: 397.18084ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:26.887899Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T18:46:26.489051Z","time spent":"398.720504ms","remote":"127.0.0.1:35886","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":27,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2025-10-01T18:46:26.887912Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.116434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-wh8vc\" limit:1 ","response":"range_response_count:1 size:5389"}
	{"level":"info","ts":"2025-10-01T18:46:26.887935Z","caller":"traceutil/trace.go:172","msg":"trace[614680544] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-wh8vc; range_end:; response_count:1; response_revision:538; }","duration":"254.141422ms","start":"2025-10-01T18:46:26.633787Z","end":"2025-10-01T18:46:26.887929Z","steps":["trace[614680544] 'agreement among raft nodes before linearized reading'  (duration: 254.021591ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:46:26.888161Z","caller":"traceutil/trace.go:172","msg":"trace[1295107224] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"407.08459ms","start":"2025-10-01T18:46:26.481069Z","end":"2025-10-01T18:46:26.888153Z","steps":["trace[1295107224] 'process raft request'  (duration: 405.16838ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:26.888414Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.490521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-d67rw\" limit:1 ","response":"range_response_count:1 size:5844"}
	{"level":"info","ts":"2025-10-01T18:46:26.888441Z","caller":"traceutil/trace.go:172","msg":"trace[378514300] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-d67rw; range_end:; response_count:1; response_revision:538; }","duration":"251.522669ms","start":"2025-10-01T18:46:26.636911Z","end":"2025-10-01T18:46:26.888434Z","steps":["trace[378514300] 'agreement among raft nodes before linearized reading'  (duration: 251.434374ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:46:26.888575Z","caller":"traceutil/trace.go:172","msg":"trace[359360957] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"375.046548ms","start":"2025-10-01T18:46:26.513520Z","end":"2025-10-01T18:46:26.888566Z","steps":["trace[359360957] 'process raft request'  (duration: 374.228467ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:26.888641Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T18:46:26.481052Z","time spent":"407.143892ms","remote":"127.0.0.1:36134","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-145303\" mod_revision:449 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-145303\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-145303\" > >"}
	{"level":"warn","ts":"2025-10-01T18:46:26.888676Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T18:46:26.513503Z","time spent":"375.134794ms","remote":"127.0.0.1:36134","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-7zq4xwlb5bn3fbhss5o2p32ec4\" mod_revision:446 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-7zq4xwlb5bn3fbhss5o2p32ec4\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-7zq4xwlb5bn3fbhss5o2p32ec4\" > >"}
	{"level":"info","ts":"2025-10-01T18:46:26.888738Z","caller":"traceutil/trace.go:172","msg":"trace[1764461780] transaction","detail":"{read_only:false; number_of_response:0; response_revision:538; }","duration":"353.203654ms","start":"2025-10-01T18:46:26.535529Z","end":"2025-10-01T18:46:26.888732Z","steps":["trace[1764461780] 'process raft request'  (duration: 352.259753ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:26.888764Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-01T18:46:26.535510Z","time spent":"353.240023ms","remote":"127.0.0.1:35986","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/minions/pause-145303\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-145303\" value_size:3846 >> failure:<>"}
	{"level":"info","ts":"2025-10-01T18:46:27.034660Z","caller":"traceutil/trace.go:172","msg":"trace[1774351475] linearizableReadLoop","detail":"{readStateIndex:576; appliedIndex:576; }","duration":"132.393886ms","start":"2025-10-01T18:46:26.902242Z","end":"2025-10-01T18:46:27.034636Z","steps":["trace[1774351475] 'read index received'  (duration: 132.386932ms)","trace[1774351475] 'applied index is now lower than readState.Index'  (duration: 6.015µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-01T18:46:27.035133Z","caller":"traceutil/trace.go:172","msg":"trace[774360137] transaction","detail":"{read_only:false; number_of_response:0; response_revision:538; }","duration":"133.371803ms","start":"2025-10-01T18:46:26.901747Z","end":"2025-10-01T18:46:27.035118Z","steps":["trace[774360137] 'process raft request'  (duration: 133.010746ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:46:27.035525Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.26241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-145303\" limit:1 ","response":"range_response_count:1 size:5280"}
	{"level":"info","ts":"2025-10-01T18:46:27.036979Z","caller":"traceutil/trace.go:172","msg":"trace[230917304] range","detail":"{range_begin:/registry/minions/pause-145303; range_end:; response_count:1; response_revision:538; }","duration":"134.724358ms","start":"2025-10-01T18:46:26.902239Z","end":"2025-10-01T18:46:27.036963Z","steps":["trace[230917304] 'agreement among raft nodes before linearized reading'  (duration: 132.472392ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:46:27.040101Z","caller":"traceutil/trace.go:172","msg":"trace[2020945473] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"130.790182ms","start":"2025-10-01T18:46:26.909299Z","end":"2025-10-01T18:46:27.040089Z","steps":["trace[2020945473] 'process raft request'  (duration: 130.566471ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:46:27.040392Z","caller":"traceutil/trace.go:172","msg":"trace[1554902488] transaction","detail":"{read_only:false; number_of_response:0; response_revision:538; }","duration":"138.400395ms","start":"2025-10-01T18:46:26.901978Z","end":"2025-10-01T18:46:27.040379Z","steps":["trace[1554902488] 'process raft request'  (duration: 137.471629ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:46:41 up 2 min,  0 users,  load average: 0.87, 0.43, 0.16
	Linux pause-145303 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [00d53babc9f40cae5475929ca20dccc5e233ddcad92caefea5e3ad4d0ac9ba22] <==
	I1001 18:46:26.389322       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 18:46:26.402927       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1001 18:46:26.410794       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1001 18:46:26.410939       1 aggregator.go:171] initial CRD sync complete...
	I1001 18:46:26.410988       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 18:46:26.411019       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 18:46:26.411039       1 cache.go:39] Caches are synced for autoregister controller
	I1001 18:46:26.425537       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1001 18:46:26.430326       1 policy_source.go:240] refreshing policies
	I1001 18:46:26.468453       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 18:46:26.473657       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1001 18:46:26.474804       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1001 18:46:26.475053       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1001 18:46:26.475994       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1001 18:46:26.480569       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 18:46:26.633100       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1001 18:46:27.320799       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1001 18:46:27.824793       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100]
	I1001 18:46:27.827018       1 controller.go:667] quota admission added evaluator for: endpoints
	I1001 18:46:27.837515       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 18:46:28.256039       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1001 18:46:28.298771       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1001 18:46:28.336589       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 18:46:28.345295       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 18:46:33.969518       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [c2a573a84aa070b6e6a8de2269a6829f188c0eb8932aff8b1979732566011882] <==
	I1001 18:46:09.268960       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1001 18:46:09.268981       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I1001 18:46:09.268996       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1001 18:46:09.269009       1 controller.go:132] Ending legacy_token_tracking_controller
	I1001 18:46:09.269013       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1001 18:46:09.269405       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1001 18:46:09.269461       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I1001 18:46:09.269475       1 establishing_controller.go:92] Shutting down EstablishingController
	I1001 18:46:09.269502       1 naming_controller.go:310] Shutting down NamingConditionController
	I1001 18:46:09.269516       1 controller.go:120] Shutting down OpenAPI V3 controller
	I1001 18:46:09.269537       1 controller.go:170] Shutting down OpenAPI controller
	I1001 18:46:09.269549       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I1001 18:46:09.269758       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1001 18:46:09.271975       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 18:46:09.272106       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 18:46:09.272595       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1001 18:46:09.272618       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1001 18:46:09.272649       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 18:46:09.272878       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1001 18:46:09.272957       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1001 18:46:09.273077       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I1001 18:46:09.273162       1 secure_serving.go:259] Stopped listening on [::]:8443
	I1001 18:46:09.273188       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1001 18:46:09.273403       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1001 18:46:09.273594       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [1fc93546096971e7fa2314ee2051afd48e49865d10ef5eb993c4ea461efb5f6a] <==
	I1001 18:46:05.194657       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1001 18:46:05.194663       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1001 18:46:05.195902       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1001 18:46:05.196003       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1001 18:46:05.199170       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1001 18:46:05.203518       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1001 18:46:05.210882       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1001 18:46:05.210891       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1001 18:46:05.214289       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1001 18:46:05.214355       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1001 18:46:05.215651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1001 18:46:05.215718       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1001 18:46:05.216766       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1001 18:46:05.217946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1001 18:46:05.220419       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1001 18:46:05.222800       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1001 18:46:05.222965       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1001 18:46:05.223077       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-145303"
	I1001 18:46:05.223128       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1001 18:46:05.224285       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1001 18:46:05.226741       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1001 18:46:05.226882       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1001 18:46:05.227700       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1001 18:46:05.231511       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1001 18:46:05.232805       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [288990005b545f3a111a56b1c55590e2feea823f94275559258de79f7c943481] <==
	I1001 18:46:29.816311       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1001 18:46:29.817941       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1001 18:46:29.817952       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1001 18:46:29.817962       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1001 18:46:29.820711       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1001 18:46:29.820721       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1001 18:46:29.824032       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1001 18:46:29.828765       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1001 18:46:29.835452       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1001 18:46:29.837732       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1001 18:46:29.839069       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1001 18:46:29.851973       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1001 18:46:29.852050       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1001 18:46:29.852076       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1001 18:46:29.855233       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1001 18:46:29.855298       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1001 18:46:29.855360       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1001 18:46:29.855466       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1001 18:46:29.855582       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1001 18:46:29.855610       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1001 18:46:29.855706       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-145303"
	I1001 18:46:29.855936       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1001 18:46:29.855601       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1001 18:46:29.855786       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1001 18:46:29.865303       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea] <==
	I1001 18:45:58.825878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1001 18:46:01.927607       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1001 18:46:01.927738       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.100"]
	E1001 18:46:01.928050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 18:46:01.996077       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1001 18:46:01.997007       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 18:46:01.997106       1 server_linux.go:132] "Using iptables Proxier"
	I1001 18:46:02.204036       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 18:46:02.204663       1 server.go:527] "Version info" version="v1.34.1"
	I1001 18:46:02.204760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:46:02.211311       1 config.go:106] "Starting endpoint slice config controller"
	I1001 18:46:02.211488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1001 18:46:02.212115       1 config.go:200] "Starting service config controller"
	I1001 18:46:02.212187       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1001 18:46:02.220928       1 config.go:403] "Starting serviceCIDR config controller"
	I1001 18:46:02.220955       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1001 18:46:02.228696       1 config.go:309] "Starting node config controller"
	I1001 18:46:02.228724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1001 18:46:02.228874       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1001 18:46:02.312279       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1001 18:46:02.312367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1001 18:46:02.321709       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [a4df559882bb9b5f729b5ea7a4903e797bb5aa6123724800b797489956dce569] <==
	I1001 18:46:27.442637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1001 18:46:27.543469       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1001 18:46:27.543526       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.100"]
	E1001 18:46:27.543596       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 18:46:27.616265       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1001 18:46:27.616384       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 18:46:27.616511       1 server_linux.go:132] "Using iptables Proxier"
	I1001 18:46:27.640690       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 18:46:27.641103       1 server.go:527] "Version info" version="v1.34.1"
	I1001 18:46:27.641117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:46:27.647087       1 config.go:200] "Starting service config controller"
	I1001 18:46:27.647109       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1001 18:46:27.647136       1 config.go:106] "Starting endpoint slice config controller"
	I1001 18:46:27.647143       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1001 18:46:27.647159       1 config.go:403] "Starting serviceCIDR config controller"
	I1001 18:46:27.650112       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1001 18:46:27.650329       1 config.go:309] "Starting node config controller"
	I1001 18:46:27.650351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1001 18:46:27.650358       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1001 18:46:27.747752       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1001 18:46:27.747888       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1001 18:46:27.750421       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [18b4c0db76fd81479bcca9eb8973902b27d2b5a65b037fcf7cd9480a34e96ff6] <==
	I1001 18:45:59.928910       1 serving.go:386] Generated self-signed cert in-memory
	I1001 18:46:02.914039       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1001 18:46:02.914191       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:46:02.924366       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1001 18:46:02.924376       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1001 18:46:02.924481       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1001 18:46:02.924529       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 18:46:02.924543       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:02.924555       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:02.924571       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:46:02.924580       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:46:03.025124       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:46:03.025199       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1001 18:46:03.025390       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:08.926099       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1001 18:46:08.926127       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1001 18:46:08.926146       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1001 18:46:08.926168       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:08.926743       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1001 18:46:08.926949       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:46:08.927281       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1001 18:46:08.927376       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [54ee5f9edf306e883a32b0d8bcfd138521f67a55b38ab5c52e3327d1dd0844e1] <==
	I1001 18:46:24.204504       1 serving.go:386] Generated self-signed cert in-memory
	W1001 18:46:26.330087       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 18:46:26.330190       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 18:46:26.330233       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 18:46:26.330255       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 18:46:26.416758       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1001 18:46:26.418623       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:46:26.421169       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:26.421237       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:46:26.421526       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1001 18:46:26.421601       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 18:46:26.522142       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 01 18:46:25 pause-145303 kubelet[3747]: E1001 18:46:25.667232    3747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145303\" not found" node="pause-145303"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: E1001 18:46:26.013129    3747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145303\" not found" node="pause-145303"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.353995    3747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-145303"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.421604    3747 apiserver.go:52] "Watching apiserver"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.458110    3747 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.470717    3747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd499d04-e196-4e19-ad92-5e1a46fc3d51-xtables-lock\") pod \"kube-proxy-wh8vc\" (UID: \"cd499d04-e196-4e19-ad92-5e1a46fc3d51\") " pod="kube-system/kube-proxy-wh8vc"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.471001    3747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd499d04-e196-4e19-ad92-5e1a46fc3d51-lib-modules\") pod \"kube-proxy-wh8vc\" (UID: \"cd499d04-e196-4e19-ad92-5e1a46fc3d51\") " pod="kube-system/kube-proxy-wh8vc"
	Oct 01 18:46:26 pause-145303 kubelet[3747]: I1001 18:46:26.668938    3747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.035924    3747 scope.go:117] "RemoveContainer" containerID="51448fb6883387a5f60abbb2cd2b5ed7102a06102bb871b969563426756fc6ea"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.036719    3747 scope.go:117] "RemoveContainer" containerID="7e9238f55b3f9200c49898c72c40ced99a5cadc4de43c9661349829003a76884"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: E1001 18:46:27.044294    3747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-145303\" already exists" pod="kube-system/etcd-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.044314    3747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.051915    3747 kubelet_node_status.go:124] "Node was previously registered" node="pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.054562    3747 kubelet_node_status.go:78] "Successfully registered node" node="pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.054620    3747 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: E1001 18:46:27.055372    3747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-145303\" already exists" pod="kube-system/kube-scheduler-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.060351    3747 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: E1001 18:46:27.072511    3747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-145303\" already exists" pod="kube-system/kube-apiserver-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.072617    3747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: E1001 18:46:27.106053    3747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-145303\" already exists" pod="kube-system/kube-controller-manager-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: I1001 18:46:27.106158    3747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-145303"
	Oct 01 18:46:27 pause-145303 kubelet[3747]: E1001 18:46:27.169370    3747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-145303\" already exists" pod="kube-system/kube-scheduler-pause-145303"
	Oct 01 18:46:32 pause-145303 kubelet[3747]: E1001 18:46:32.625357    3747 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759344392624901865  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 01 18:46:32 pause-145303 kubelet[3747]: E1001 18:46:32.625399    3747 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759344392624901865  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 01 18:46:33 pause-145303 kubelet[3747]: I1001 18:46:33.931182    3747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-145303 -n pause-145303
helpers_test.go:269: (dbg) Run:  kubectl --context pause-145303 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (76.29s)

                                                
                                    

Test pass (280/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 23.13
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 11.43
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.14
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.95
22 TestOffline 76.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.18
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
27 TestAddons/Setup 197.71
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 11.49
35 TestAddons/parallel/Registry 17.59
36 TestAddons/parallel/RegistryCreds 0.67
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 6.02
41 TestAddons/parallel/CSI 55.74
42 TestAddons/parallel/Headlamp 20.8
43 TestAddons/parallel/CloudSpanner 6.69
44 TestAddons/parallel/LocalPath 16.18
45 TestAddons/parallel/NvidiaDevicePlugin 6.88
46 TestAddons/parallel/Yakd 12.05
48 TestAddons/StoppedEnableDisable 89.39
49 TestCertOptions 72.54
52 TestForceSystemdFlag 61.05
53 TestForceSystemdEnv 68.55
55 TestKVMDriverInstallOrUpdate 0.74
59 TestErrorSpam/setup 36.73
60 TestErrorSpam/start 0.34
61 TestErrorSpam/status 0.78
62 TestErrorSpam/pause 1.64
63 TestErrorSpam/unpause 1.83
64 TestErrorSpam/stop 4.6
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 81.68
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 35.02
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
76 TestFunctional/serial/CacheCmd/cache/add_local 2.12
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 50.31
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.41
87 TestFunctional/serial/LogsFileCmd 1.39
88 TestFunctional/serial/InvalidService 3.97
90 TestFunctional/parallel/ConfigCmd 0.33
91 TestFunctional/parallel/DashboardCmd 19.05
92 TestFunctional/parallel/DryRun 0.26
93 TestFunctional/parallel/InternationalLanguage 0.13
94 TestFunctional/parallel/StatusCmd 0.81
98 TestFunctional/parallel/ServiceCmdConnect 12.55
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 46.32
102 TestFunctional/parallel/SSHCmd 0.36
103 TestFunctional/parallel/CpCmd 1.3
104 TestFunctional/parallel/MySQL 32.57
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.29
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
114 TestFunctional/parallel/License 0.32
115 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
116 TestFunctional/parallel/Version/short 0.05
117 TestFunctional/parallel/Version/components 0.64
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.45
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
122 TestFunctional/parallel/ImageCommands/ImageBuild 6.74
123 TestFunctional/parallel/ImageCommands/Setup 1.76
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
135 TestFunctional/parallel/ProfileCmd/profile_list 0.38
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
138 TestFunctional/parallel/MountCmd/any-port 8.64
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
144 TestFunctional/parallel/ServiceCmd/List 0.27
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.25
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
150 TestFunctional/parallel/ServiceCmd/Format 0.29
151 TestFunctional/parallel/ServiceCmd/URL 0.29
152 TestFunctional/parallel/MountCmd/specific-port 1.68
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 241.15
162 TestMultiControlPlane/serial/DeployApp 7.14
163 TestMultiControlPlane/serial/PingHostFromPods 1.19
164 TestMultiControlPlane/serial/AddWorkerNode 44.09
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
167 TestMultiControlPlane/serial/CopyFile 12.68
168 TestMultiControlPlane/serial/StopSecondaryNode 72.97
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
170 TestMultiControlPlane/serial/RestartSecondaryNode 43.15
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 371.46
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.44
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
175 TestMultiControlPlane/serial/StopCluster 255.12
176 TestMultiControlPlane/serial/RestartCluster 104.36
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
178 TestMultiControlPlane/serial/AddSecondaryNode 76.12
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
183 TestJSONOutput/start/Command 85.12
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.78
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.58
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 78.42
215 TestMountStart/serial/StartWithMountFirst 21.17
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 23.39
218 TestMountStart/serial/VerifyMountSecond 0.36
219 TestMountStart/serial/DeleteFirst 0.72
220 TestMountStart/serial/VerifyMountPostDelete 0.36
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 19.77
223 TestMountStart/serial/VerifyMountPostStop 0.36
226 TestMultiNode/serial/FreshStart2Nodes 127.61
227 TestMultiNode/serial/DeployApp2Nodes 5.73
228 TestMultiNode/serial/PingHostFrom2Pods 0.75
229 TestMultiNode/serial/AddNode 42.29
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.58
232 TestMultiNode/serial/CopyFile 7.14
233 TestMultiNode/serial/StopNode 2.42
234 TestMultiNode/serial/StartAfterStop 39.02
235 TestMultiNode/serial/RestartKeepsNodes 329.12
236 TestMultiNode/serial/DeleteNode 2.74
237 TestMultiNode/serial/StopMultiNode 165.49
238 TestMultiNode/serial/RestartMultiNode 94.96
239 TestMultiNode/serial/ValidateNameConflict 39.1
246 TestScheduledStopUnix 107.34
250 TestRunningBinaryUpgrade 97.98
252 TestKubernetesUpgrade 168.54
256 TestPause/serial/Start 79.62
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
258 TestStoppedBinaryUpgrade/Setup 2.64
259 TestNoKubernetes/serial/StartWithK8s 82.19
260 TestStoppedBinaryUpgrade/Upgrade 146.61
262 TestNoKubernetes/serial/StartWithStopK8s 35.97
263 TestNoKubernetes/serial/Start 33.32
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
266 TestNoKubernetes/serial/ProfileList 8.71
267 TestNoKubernetes/serial/Stop 1.27
268 TestNoKubernetes/serial/StartNoArgs 61.52
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
284 TestNetworkPlugins/group/false 3.26
289 TestStartStop/group/old-k8s-version/serial/FirstStart 99.86
291 TestStartStop/group/no-preload/serial/FirstStart 89.34
293 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.02
294 TestStartStop/group/old-k8s-version/serial/DeployApp 10.35
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.32
296 TestStartStop/group/old-k8s-version/serial/Stop 73.91
297 TestStartStop/group/no-preload/serial/DeployApp 10.28
298 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.27
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
300 TestStartStop/group/no-preload/serial/Stop 88.23
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 89.75
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
304 TestStartStop/group/old-k8s-version/serial/SecondStart 45.11
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.63
307 TestStartStop/group/no-preload/serial/SecondStart 58.59
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.54
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
312 TestStartStop/group/old-k8s-version/serial/Pause 2.78
314 TestStartStop/group/newest-cni/serial/FirstStart 64.86
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.18
316 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
319 TestStartStop/group/no-preload/serial/Pause 3.31
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
322 TestStartStop/group/embed-certs/serial/FirstStart 78.56
323 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
324 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.98
325 TestNetworkPlugins/group/auto/Start 105.18
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
328 TestStartStop/group/newest-cni/serial/Stop 7.01
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
330 TestStartStop/group/newest-cni/serial/SecondStart 56.84
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
334 TestStartStop/group/newest-cni/serial/Pause 2.76
335 TestNetworkPlugins/group/kindnet/Start 86.37
336 TestStartStop/group/embed-certs/serial/DeployApp 11.32
337 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
338 TestStartStop/group/embed-certs/serial/Stop 85.72
339 TestNetworkPlugins/group/auto/KubeletFlags 0.23
340 TestNetworkPlugins/group/auto/NetCatPod 11.25
341 TestNetworkPlugins/group/auto/DNS 0.15
342 TestNetworkPlugins/group/auto/Localhost 0.13
343 TestNetworkPlugins/group/auto/HairPin 0.14
344 TestNetworkPlugins/group/calico/Start 63.51
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
347 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
348 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
349 TestStartStop/group/embed-certs/serial/SecondStart 43.63
350 TestNetworkPlugins/group/kindnet/DNS 0.17
351 TestNetworkPlugins/group/kindnet/Localhost 0.15
352 TestNetworkPlugins/group/kindnet/HairPin 0.14
353 TestNetworkPlugins/group/custom-flannel/Start 73.76
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/KubeletFlags 0.3
356 TestNetworkPlugins/group/calico/NetCatPod 12.34
357 TestNetworkPlugins/group/calico/DNS 0.18
358 TestNetworkPlugins/group/calico/Localhost 0.15
359 TestNetworkPlugins/group/calico/HairPin 0.13
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 18.01
361 TestNetworkPlugins/group/enable-default-cni/Start 53.84
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
364 TestStartStop/group/embed-certs/serial/Pause 3.15
365 TestNetworkPlugins/group/flannel/Start 79.57
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
368 TestNetworkPlugins/group/custom-flannel/DNS 0.23
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
373 TestNetworkPlugins/group/bridge/Start 89.65
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
379 TestNetworkPlugins/group/flannel/NetCatPod 10.25
380 TestNetworkPlugins/group/flannel/DNS 0.15
381 TestNetworkPlugins/group/flannel/Localhost 0.12
382 TestNetworkPlugins/group/flannel/HairPin 0.13
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
384 TestNetworkPlugins/group/bridge/NetCatPod 11.26
385 TestNetworkPlugins/group/bridge/DNS 0.13
386 TestNetworkPlugins/group/bridge/Localhost 0.11
387 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (23.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-805203 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-805203 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.127298297s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (23.13s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1001 17:47:34.977572   13469 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1001 17:47:34.977681   13469 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-805203
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-805203: exit status 85 (60.33526ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-805203 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-805203 │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 17:47:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 17:47:11.889311   13483 out.go:360] Setting OutFile to fd 1 ...
	I1001 17:47:11.889555   13483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 17:47:11.889565   13483 out.go:374] Setting ErrFile to fd 2...
	I1001 17:47:11.889572   13483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 17:47:11.889770   13483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	W1001 17:47:11.889918   13483 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21631-9542/.minikube/config/config.json: open /home/jenkins/minikube-integration/21631-9542/.minikube/config/config.json: no such file or directory
	I1001 17:47:11.890407   13483 out.go:368] Setting JSON to true
	I1001 17:47:11.891262   13483 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1776,"bootTime":1759339056,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 17:47:11.891348   13483 start.go:140] virtualization: kvm guest
	I1001 17:47:11.893486   13483 out.go:99] [download-only-805203] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1001 17:47:11.893619   13483 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 17:47:11.893668   13483 notify.go:220] Checking for updates...
	I1001 17:47:11.894896   13483 out.go:171] MINIKUBE_LOCATION=21631
	I1001 17:47:11.896331   13483 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 17:47:11.897662   13483 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 17:47:11.898855   13483 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 17:47:11.900009   13483 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1001 17:47:11.902362   13483 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 17:47:11.902595   13483 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 17:47:12.384341   13483 out.go:99] Using the kvm2 driver based on user configuration
	I1001 17:47:12.384369   13483 start.go:304] selected driver: kvm2
	I1001 17:47:12.384377   13483 start.go:921] validating driver "kvm2" against <nil>
	I1001 17:47:12.384735   13483 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 17:47:12.384865   13483 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 17:47:12.398966   13483 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 17:47:12.398997   13483 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 17:47:12.411903   13483 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 17:47:12.411946   13483 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1001 17:47:12.412487   13483 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1001 17:47:12.412673   13483 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 17:47:12.412708   13483 cni.go:84] Creating CNI manager for ""
	I1001 17:47:12.412762   13483 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 17:47:12.412775   13483 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 17:47:12.412832   13483 start.go:348] cluster config:
	{Name:download-only-805203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-805203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 17:47:12.413027   13483 iso.go:125] acquiring lock: {Name:mke4f33636eb3043bce5a51fcbb56cd6b63e4b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 17:47:12.414832   13483 out.go:99] Downloading VM boot image ...
	I1001 17:47:12.414869   13483 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21631-9542/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1001 17:47:22.344886   13483 out.go:99] Starting "download-only-805203" primary control-plane node in "download-only-805203" cluster
	I1001 17:47:22.344916   13483 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1001 17:47:22.439397   13483 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1001 17:47:22.439454   13483 cache.go:58] Caching tarball of preloaded images
	I1001 17:47:22.439622   13483 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1001 17:47:22.441248   13483 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1001 17:47:22.441267   13483 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1001 17:47:22.539483   13483 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1001 17:47:22.539631   13483 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-805203 host does not exist
	  To start a cluster, run: "minikube start -p download-only-805203"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-805203
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (11.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-807514 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-807514 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (11.429753662s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.43s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1001 17:47:46.740966   13469 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1001 17:47:46.741010   13469 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-807514
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-807514: exit status 85 (60.417954ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-805203 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-805203 │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │ 01 Oct 25 17:47 UTC │
	│ delete  │ -p download-only-805203                                                                                                                                                                             │ download-only-805203 │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │ 01 Oct 25 17:47 UTC │
	│ start   │ -o=json --download-only -p download-only-807514 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-807514 │ jenkins │ v1.37.0 │ 01 Oct 25 17:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 17:47:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 17:47:35.353551   13754 out.go:360] Setting OutFile to fd 1 ...
	I1001 17:47:35.353794   13754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 17:47:35.353802   13754 out.go:374] Setting ErrFile to fd 2...
	I1001 17:47:35.353806   13754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 17:47:35.353963   13754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 17:47:35.354406   13754 out.go:368] Setting JSON to true
	I1001 17:47:35.355251   13754 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1799,"bootTime":1759339056,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 17:47:35.355326   13754 start.go:140] virtualization: kvm guest
	I1001 17:47:35.357169   13754 out.go:99] [download-only-807514] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1001 17:47:35.357325   13754 notify.go:220] Checking for updates...
	I1001 17:47:35.358735   13754 out.go:171] MINIKUBE_LOCATION=21631
	I1001 17:47:35.360126   13754 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 17:47:35.361651   13754 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 17:47:35.363007   13754 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 17:47:35.364079   13754 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1001 17:47:35.366310   13754 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 17:47:35.366553   13754 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 17:47:35.400367   13754 out.go:99] Using the kvm2 driver based on user configuration
	I1001 17:47:35.400418   13754 start.go:304] selected driver: kvm2
	I1001 17:47:35.400424   13754 start.go:921] validating driver "kvm2" against <nil>
	I1001 17:47:35.400740   13754 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 17:47:35.400814   13754 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 17:47:35.414721   13754 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 17:47:35.414748   13754 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21631-9542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 17:47:35.427534   13754 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1001 17:47:35.427574   13754 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1001 17:47:35.428098   13754 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1001 17:47:35.428233   13754 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 17:47:35.428260   13754 cni.go:84] Creating CNI manager for ""
	I1001 17:47:35.428302   13754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 17:47:35.428310   13754 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 17:47:35.428347   13754 start.go:348] cluster config:
	{Name:download-only-807514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-807514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 17:47:35.428423   13754 iso.go:125] acquiring lock: {Name:mke4f33636eb3043bce5a51fcbb56cd6b63e4b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 17:47:35.429996   13754 out.go:99] Starting "download-only-807514" primary control-plane node in "download-only-807514" cluster
	I1001 17:47:35.430010   13754 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 17:47:35.848938   13754 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1001 17:47:35.848998   13754 cache.go:58] Caching tarball of preloaded images
	I1001 17:47:35.849211   13754 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 17:47:35.851006   13754 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1001 17:47:35.851026   13754 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1001 17:47:35.949836   13754 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1001 17:47:35.949884   13754 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21631-9542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-807514 host does not exist
	  To start a cluster, run: "minikube start -p download-only-807514"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-807514
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.95s)

                                                
                                                
=== RUN   TestBinaryMirror
I1001 17:47:47.327073   13469 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-971827 --alsologtostderr --binary-mirror http://127.0.0.1:42391 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-971827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-971827
--- PASS: TestBinaryMirror (0.95s)

                                                
                                    
TestOffline (76.31s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-136397 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-136397 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m15.433301778s)
helpers_test.go:175: Cleaning up "offline-crio-136397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-136397
--- PASS: TestOffline (76.31s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.18s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-289249
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-289249: exit status 85 (175.095018ms)

                                                
                                                
-- stdout --
	* Profile "addons-289249" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-289249"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.18s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-289249
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-289249: exit status 85 (174.605682ms)

                                                
                                                
-- stdout --
	* Profile "addons-289249" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-289249"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

                                                
                                    
TestAddons/Setup (197.71s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-289249 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-289249 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m17.713815192s)
--- PASS: TestAddons/Setup (197.71s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-289249 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-289249 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-289249 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-289249 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1a65eee9-eb8a-4623-a414-70ae928f0499] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1a65eee9-eb8a-4623-a414-70ae928f0499] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004067564s
addons_test.go:694: (dbg) Run:  kubectl --context addons-289249 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-289249 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-289249 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.49s)

                                                
                                    
TestAddons/parallel/Registry (17.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.764094ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-l4mmr" [65f1723b-c2a7-4cd0-b4dd-56463fe8a7df] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004090325s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-rzht2" [b910836a-896c-4366-866b-eea6834f1e7e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007774127s
addons_test.go:392: (dbg) Run:  kubectl --context addons-289249 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-289249 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-289249 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.722460323s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 ip
2025/10/01 17:51:43 [DEBUG] GET http://192.168.39.98:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.59s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.20689ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-289249
addons_test.go:332: (dbg) Run:  kubectl --context addons-289249 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-6sfsx" [3c428d56-4ab5-4dc2-af3c-b06291b79dfd] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.008045864s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.30s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.02s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 10.224965ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-9vxgb" [9266edab-0025-4ccb-8c18-124badd0f0db] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006640534s
addons_test.go:463: (dbg) Run:  kubectl --context addons-289249 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)

                                                
                                    
TestAddons/parallel/CSI (55.74s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1001 17:51:39.498251   13469 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1001 17:51:39.510649   13469 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1001 17:51:39.510692   13469 kapi.go:107] duration metric: took 12.454192ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 12.469528ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-289249 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-289249 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a5f1b33d-0cb0-48ee-93d1-5c4259e2192e] Pending
helpers_test.go:352: "task-pv-pod" [a5f1b33d-0cb0-48ee-93d1-5c4259e2192e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a5f1b33d-0cb0-48ee-93d1-5c4259e2192e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004527398s
addons_test.go:572: (dbg) Run:  kubectl --context addons-289249 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-289249 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-289249 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-289249 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-289249 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-289249 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-289249 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [f39dbf33-522e-42ef-b13d-c84fd52574ac] Pending
helpers_test.go:352: "task-pv-pod-restore" [f39dbf33-522e-42ef-b13d-c84fd52574ac] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [f39dbf33-522e-42ef-b13d-c84fd52574ac] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005307619s
addons_test.go:614: (dbg) Run:  kubectl --context addons-289249 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-289249 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-289249 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-289249 addons disable volumesnapshots --alsologtostderr -v=1: (1.049117941s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-289249 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.899611765s)
--- PASS: TestAddons/parallel/CSI (55.74s)

                                                
                                    
TestAddons/parallel/Headlamp (20.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-289249 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-pvk85" [18bd412a-b284-49ed-bc93-7e2aa8874ca2] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-pvk85" [18bd412a-b284-49ed-bc93-7e2aa8874ca2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-pvk85" [18bd412a-b284-49ed-bc93-7e2aa8874ca2] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.006558902s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-289249 addons disable headlamp --alsologtostderr -v=1: (5.864043905s)
--- PASS: TestAddons/parallel/Headlamp (20.80s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-969vw" [c01f7ec7-c0bf-4c84-b9a2-88021d2a350a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004326025s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.69s)

                                                
                                    
TestAddons/parallel/LocalPath (16.18s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-289249 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-289249 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-289249 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d24bcff4-e18e-4b1e-b144-b304ee4d52f4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d24bcff4-e18e-4b1e-b144-b304ee4d52f4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d24bcff4-e18e-4b1e-b144-b304ee4d52f4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.003660395s
addons_test.go:967: (dbg) Run:  kubectl --context addons-289249 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 ssh "cat /opt/local-path-provisioner/pvc-f59bf8b6-7ab5-405f-97f9-b1c0ba9ac7a3_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-289249 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-289249 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (16.18s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.88s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-bwg47" [e9a22707-ec3b-4876-a345-51411014cf5f] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00541141s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.88s)

                                                
                                    
TestAddons/parallel/Yakd (12.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8z6k8" [9d8e39a7-c9b1-4950-9ee9-9037cb1bd195] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.044052404s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-289249 addons disable yakd --alsologtostderr -v=1: (6.001023089s)
--- PASS: TestAddons/parallel/Yakd (12.05s)

                                                
                                    
TestAddons/StoppedEnableDisable (89.39s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-289249
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-289249: (1m29.125434276s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-289249
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-289249
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-289249
--- PASS: TestAddons/StoppedEnableDisable (89.39s)

                                                
                                    
TestCertOptions (72.54s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-580639 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-580639 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.002085122s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-580639 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-580639 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-580639 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-580639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-580639
--- PASS: TestCertOptions (72.54s)

                                                
                                    
TestForceSystemdFlag (61.05s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-893842 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-893842 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.940876744s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-893842 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-893842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-893842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-893842: (1.887910917s)
--- PASS: TestForceSystemdFlag (61.05s)

                                                
                                    
TestForceSystemdEnv (68.55s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-506900 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-506900 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.693949424s)
helpers_test.go:175: Cleaning up "force-systemd-env-506900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-506900
--- PASS: TestForceSystemdEnv (68.55s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.74s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1001 18:47:52.087578   13469 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1001 18:47:52.087755   13469 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3040310905/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1001 18:47:52.121764   13469 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3040310905/001/docker-machine-driver-kvm2 version is 1.1.1
W1001 18:47:52.121825   13469 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1001 18:47:52.121987   13469 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1001 18:47:52.122038   13469 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3040310905/001/docker-machine-driver-kvm2
I1001 18:47:52.694759   13469 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3040310905/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1001 18:47:52.712827   13469 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3040310905/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.74s)

                                                
                                    
TestErrorSpam/setup (36.73s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-306641 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-306641 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 17:56:06.816329   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:06.825982   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:06.838067   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:06.859498   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:06.900907   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:06.982406   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:07.143983   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:07.465699   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:08.107728   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:09.389341   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:11.952282   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:56:17.074183   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-306641 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-306641 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.730846032s)
--- PASS: TestErrorSpam/setup (36.73s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
TestErrorSpam/unpause (1.83s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

                                                
                                    
TestErrorSpam/stop (4.6s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 stop: (1.918214813s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 stop: (1.680397843s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306641 --log_dir /tmp/nospam-306641 stop
E1001 17:56:27.316355   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/stop (4.60s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21631-9542/.minikube/files/etc/test/nested/copy/13469/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (81.68s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042563 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 17:56:47.798351   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 17:57:28.759795   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-042563 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.68192466s)
--- PASS: TestFunctional/serial/StartWithProxy (81.68s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.02s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1001 17:57:49.600818   13469 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042563 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-042563 --alsologtostderr -v=8: (35.019652619s)
functional_test.go:678: soft start took 35.020410834s for "functional-042563" cluster.
I1001 17:58:24.620841   13469 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (35.02s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-042563 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-042563 cache add registry.k8s.io/pause:3.1: (1.105374486s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-042563 cache add registry.k8s.io/pause:3.3: (1.242158005s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-042563 cache add registry.k8s.io/pause:latest: (1.096205112s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-042563 /tmp/TestFunctionalserialCacheCmdcacheadd_local828299785/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 cache add minikube-local-cache-test:functional-042563
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-042563 cache add minikube-local-cache-test:functional-042563: (1.792063225s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 cache delete minikube-local-cache-test:functional-042563
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-042563
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042563 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (208.873695ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 kubectl -- --context functional-042563 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-042563 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (50.31s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042563 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1001 17:58:50.683612   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-042563 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.30843571s)
functional_test.go:776: restart took 50.308548867s for "functional-042563" cluster.
I1001 17:59:22.896056   13469 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (50.31s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-042563 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-042563 logs: (1.407396859s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 logs --file /tmp/TestFunctionalserialLogsFileCmd4198105936/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-042563 logs --file /tmp/TestFunctionalserialLogsFileCmd4198105936/001/logs.txt: (1.392744605s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                    
TestFunctional/serial/InvalidService (3.97s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-042563 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-042563
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-042563: exit status 115 (276.903583ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.65:31515 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-042563 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.97s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042563 config get cpus: exit status 14 (60.576817ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042563 config get cpus: exit status 14 (58.287225ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-042563 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-042563 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 21993: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.05s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042563 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-042563 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (126.246746ms)

                                                
                                                
-- stdout --
	* [functional-042563] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 17:59:39.610134   21618 out.go:360] Setting OutFile to fd 1 ...
	I1001 17:59:39.610359   21618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 17:59:39.610369   21618 out.go:374] Setting ErrFile to fd 2...
	I1001 17:59:39.610373   21618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 17:59:39.610559   21618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 17:59:39.611022   21618 out.go:368] Setting JSON to false
	I1001 17:59:39.612045   21618 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2524,"bootTime":1759339056,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 17:59:39.612125   21618 start.go:140] virtualization: kvm guest
	I1001 17:59:39.614010   21618 out.go:179] * [functional-042563] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1001 17:59:39.615257   21618 notify.go:220] Checking for updates...
	I1001 17:59:39.615279   21618 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 17:59:39.616496   21618 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 17:59:39.617569   21618 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 17:59:39.618741   21618 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 17:59:39.619815   21618 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 17:59:39.620949   21618 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 17:59:39.622578   21618 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 17:59:39.623189   21618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:59:39.623265   21618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:59:39.638612   21618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I1001 17:59:39.639175   21618 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:59:39.639792   21618 main.go:141] libmachine: Using API Version  1
	I1001 17:59:39.639830   21618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:59:39.640227   21618 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:59:39.640455   21618 main.go:141] libmachine: (functional-042563) Calling .DriverName
	I1001 17:59:39.640764   21618 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 17:59:39.641206   21618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:59:39.641251   21618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:59:39.654790   21618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I1001 17:59:39.655273   21618 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:59:39.655859   21618 main.go:141] libmachine: Using API Version  1
	I1001 17:59:39.655888   21618 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:59:39.656223   21618 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:59:39.656437   21618 main.go:141] libmachine: (functional-042563) Calling .DriverName
	I1001 17:59:39.688036   21618 out.go:179] * Using the kvm2 driver based on existing profile
	I1001 17:59:39.689145   21618 start.go:304] selected driver: kvm2
	I1001 17:59:39.689159   21618 start.go:921] validating driver "kvm2" against &{Name:functional-042563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clu
sterName:functional-042563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 17:59:39.689270   21618 start.go:932] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 17:59:39.691193   21618 out.go:203] 
	W1001 17:59:39.692306   21618 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1001 17:59:39.693412   21618 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042563 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.26s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042563 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-042563 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (127.699095ms)

                                                
                                                
-- stdout --
	* [functional-042563] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 17:59:39.874992   21709 out.go:360] Setting OutFile to fd 1 ...
	I1001 17:59:39.875088   21709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 17:59:39.875100   21709 out.go:374] Setting ErrFile to fd 2...
	I1001 17:59:39.875107   21709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 17:59:39.875440   21709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 17:59:39.875896   21709 out.go:368] Setting JSON to false
	I1001 17:59:39.876833   21709 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2524,"bootTime":1759339056,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 17:59:39.876920   21709 start.go:140] virtualization: kvm guest
	I1001 17:59:39.878508   21709 out.go:179] * [functional-042563] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1001 17:59:39.880009   21709 notify.go:220] Checking for updates...
	I1001 17:59:39.880034   21709 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 17:59:39.881418   21709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 17:59:39.882848   21709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 17:59:39.884045   21709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 17:59:39.885236   21709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 17:59:39.886379   21709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 17:59:39.888103   21709 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 17:59:39.888690   21709 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:59:39.888775   21709 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:59:39.902186   21709 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I1001 17:59:39.902659   21709 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:59:39.903156   21709 main.go:141] libmachine: Using API Version  1
	I1001 17:59:39.903177   21709 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:59:39.903658   21709 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:59:39.903856   21709 main.go:141] libmachine: (functional-042563) Calling .DriverName
	I1001 17:59:39.904151   21709 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 17:59:39.904607   21709 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 17:59:39.904655   21709 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 17:59:39.917792   21709 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I1001 17:59:39.918272   21709 main.go:141] libmachine: () Calling .GetVersion
	I1001 17:59:39.918794   21709 main.go:141] libmachine: Using API Version  1
	I1001 17:59:39.918813   21709 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 17:59:39.919150   21709 main.go:141] libmachine: () Calling .GetMachineName
	I1001 17:59:39.919360   21709 main.go:141] libmachine: (functional-042563) Calling .DriverName
	I1001 17:59:39.950185   21709 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1001 17:59:39.951291   21709 start.go:304] selected driver: kvm2
	I1001 17:59:39.951310   21709 start.go:921] validating driver "kvm2" against &{Name:functional-042563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clu
sterName:functional-042563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 17:59:39.951407   21709 start.go:932] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 17:59:39.953488   21709 out.go:203] 
	W1001 17:59:39.954525   21709 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1001 17:59:39.955636   21709 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-042563 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-042563 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-5nkjj" [14c5d1f1-50ff-4c30-846a-fc6dd11b029d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-5nkjj" [14c5d1f1-50ff-4c30-846a-fc6dd11b029d] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.007526582s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.65:32327
functional_test.go:1680: http://192.168.39.65:32327: success! body:
Request served by hello-node-connect-7d85dfc575-5nkjj

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.65:32327
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.55s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ef3fba5d-a9a7-42f9-8de8-50009391f66e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005080837s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-042563 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-042563 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-042563 get pvc myclaim -o=json
I1001 17:59:36.822324   13469 retry.go:31] will retry after 1.420436208s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2f51d432-beda-4dfa-81c7-98093a63be37 ResourceVersion:736 Generation:0 CreationTimestamp:2025-10-01 17:59:36 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b3d750 VolumeMode:0xc001b3d760 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-042563 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-042563 apply -f testdata/storage-provisioner/pod.yaml
I1001 17:59:38.443567   13469 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [56ec73ee-130b-44fc-8ac2-a8e21933cead] Pending
helpers_test.go:352: "sp-pod" [56ec73ee-130b-44fc-8ac2-a8e21933cead] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [56ec73ee-130b-44fc-8ac2-a8e21933cead] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.006346179s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-042563 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-042563 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-042563 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8400aa94-37ed-48c7-ae04-b9994d0f0ffb] Pending
helpers_test.go:352: "sp-pod" [8400aa94-37ed-48c7-ae04-b9994d0f0ffb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/10/01 17:59:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "sp-pod" [8400aa94-37ed-48c7-ae04-b9994d0f0ffb] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004117156s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-042563 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.32s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh -n functional-042563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 cp functional-042563:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2359646977/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh -n functional-042563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh -n functional-042563 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)

                                                
                                    
TestFunctional/parallel/MySQL (32.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-042563 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-k459g" [25543d85-5ef2-46d8-86fd-ae3310028ac9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-k459g" [25543d85-5ef2-46d8-86fd-ae3310028ac9] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.006304551s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-042563 exec mysql-5bb876957f-k459g -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-042563 exec mysql-5bb876957f-k459g -- mysql -ppassword -e "show databases;": exit status 1 (141.756519ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1001 18:00:17.005985   13469 retry.go:31] will retry after 856.885149ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-042563 exec mysql-5bb876957f-k459g -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.57s)

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/13469/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo cat /etc/test/nested/copy/13469/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/13469.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo cat /etc/ssl/certs/13469.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/13469.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo cat /usr/share/ca-certificates/13469.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/134692.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo cat /etc/ssl/certs/134692.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/134692.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo cat /usr/share/ca-certificates/134692.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-042563 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042563 ssh "sudo systemctl is-active docker": exit status 1 (226.959932ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042563 ssh "sudo systemctl is-active containerd": exit status 1 (208.204507ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

TestFunctional/parallel/License (0.32s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-042563 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-042563 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7pdv2" [eeb1fab9-5c26-4f22-82d7-22da6a4fdf96] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-7pdv2" [eeb1fab9-5c26-4f22-82d7-22da6a4fdf96] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.006829176s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.64s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image ls --format short --alsologtostderr
I1001 17:59:53.758767   13469 detect.go:223] nested VM detected
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-042563 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-042563
localhost/kicbase/echo-server:functional-042563
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-042563 image ls --format short --alsologtostderr:
I1001 17:59:53.754709   22521 out.go:360] Setting OutFile to fd 1 ...
I1001 17:59:53.754971   22521 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 17:59:53.754986   22521 out.go:374] Setting ErrFile to fd 2...
I1001 17:59:53.754993   22521 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 17:59:53.755210   22521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
I1001 17:59:53.755913   22521 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 17:59:53.756025   22521 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 17:59:53.756401   22521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 17:59:53.756466   22521 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 17:59:53.770962   22521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37591
I1001 17:59:53.771515   22521 main.go:141] libmachine: () Calling .GetVersion
I1001 17:59:53.772160   22521 main.go:141] libmachine: Using API Version  1
I1001 17:59:53.772185   22521 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 17:59:53.772608   22521 main.go:141] libmachine: () Calling .GetMachineName
I1001 17:59:53.772842   22521 main.go:141] libmachine: (functional-042563) Calling .GetState
I1001 17:59:53.775004   22521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 17:59:53.775049   22521 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 17:59:53.788542   22521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44549
I1001 17:59:53.789052   22521 main.go:141] libmachine: () Calling .GetVersion
I1001 17:59:53.789711   22521 main.go:141] libmachine: Using API Version  1
I1001 17:59:53.789745   22521 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 17:59:53.790124   22521 main.go:141] libmachine: () Calling .GetMachineName
I1001 17:59:53.790297   22521 main.go:141] libmachine: (functional-042563) Calling .DriverName
I1001 17:59:53.790496   22521 ssh_runner.go:195] Run: systemctl --version
I1001 17:59:53.790523   22521 main.go:141] libmachine: (functional-042563) Calling .GetSSHHostname
I1001 17:59:53.794021   22521 main.go:141] libmachine: (functional-042563) DBG | domain functional-042563 has defined MAC address 52:54:00:56:34:e7 in network mk-functional-042563
I1001 17:59:53.794411   22521 main.go:141] libmachine: (functional-042563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:34:e7", ip: ""} in network mk-functional-042563: {Iface:virbr1 ExpiryTime:2025-10-01 18:56:42 +0000 UTC Type:0 Mac:52:54:00:56:34:e7 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-042563 Clientid:01:52:54:00:56:34:e7}
I1001 17:59:53.794467   22521 main.go:141] libmachine: (functional-042563) DBG | domain functional-042563 has defined IP address 192.168.39.65 and MAC address 52:54:00:56:34:e7 in network mk-functional-042563
I1001 17:59:53.794597   22521 main.go:141] libmachine: (functional-042563) Calling .GetSSHPort
I1001 17:59:53.794783   22521 main.go:141] libmachine: (functional-042563) Calling .GetSSHKeyPath
I1001 17:59:53.794930   22521 main.go:141] libmachine: (functional-042563) Calling .GetSSHUsername
I1001 17:59:53.795084   22521 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/functional-042563/id_rsa Username:docker}
I1001 17:59:53.887496   22521 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 17:59:53.950240   22521 main.go:141] libmachine: Making call to close driver server
I1001 17:59:53.950253   22521 main.go:141] libmachine: (functional-042563) Calling .Close
I1001 17:59:53.950540   22521 main.go:141] libmachine: Successfully made call to close driver server
I1001 17:59:53.950558   22521 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 17:59:53.950567   22521 main.go:141] libmachine: Making call to close driver server
I1001 17:59:53.950577   22521 main.go:141] libmachine: (functional-042563) Calling .Close
I1001 17:59:53.950812   22521 main.go:141] libmachine: Successfully made call to close driver server
I1001 17:59:53.950829   22521 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 17:59:53.950836   22521 main.go:141] libmachine: (functional-042563) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-042563 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/nginx                 │ latest             │ 203ad09fc1566 │ 197MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-042563  │ bcc3b5f0ca115 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-042563  │ 9056ab77afb8e │ 4.95MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-042563 image ls --format table --alsologtostderr:
I1001 17:59:59.313961   22672 out.go:360] Setting OutFile to fd 1 ...
I1001 17:59:59.314217   22672 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 17:59:59.314228   22672 out.go:374] Setting ErrFile to fd 2...
I1001 17:59:59.314232   22672 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 17:59:59.314423   22672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
I1001 17:59:59.315017   22672 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 17:59:59.315104   22672 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 17:59:59.315481   22672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 17:59:59.315529   22672 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 17:59:59.330502   22672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
I1001 17:59:59.331154   22672 main.go:141] libmachine: () Calling .GetVersion
I1001 17:59:59.331787   22672 main.go:141] libmachine: Using API Version  1
I1001 17:59:59.331821   22672 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 17:59:59.332286   22672 main.go:141] libmachine: () Calling .GetMachineName
I1001 17:59:59.332537   22672 main.go:141] libmachine: (functional-042563) Calling .GetState
I1001 17:59:59.334752   22672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 17:59:59.334805   22672 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 17:59:59.348897   22672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
I1001 17:59:59.349479   22672 main.go:141] libmachine: () Calling .GetVersion
I1001 17:59:59.350000   22672 main.go:141] libmachine: Using API Version  1
I1001 17:59:59.350025   22672 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 17:59:59.350406   22672 main.go:141] libmachine: () Calling .GetMachineName
I1001 17:59:59.350630   22672 main.go:141] libmachine: (functional-042563) Calling .DriverName
I1001 17:59:59.350887   22672 ssh_runner.go:195] Run: systemctl --version
I1001 17:59:59.350912   22672 main.go:141] libmachine: (functional-042563) Calling .GetSSHHostname
I1001 17:59:59.354373   22672 main.go:141] libmachine: (functional-042563) DBG | domain functional-042563 has defined MAC address 52:54:00:56:34:e7 in network mk-functional-042563
I1001 17:59:59.354845   22672 main.go:141] libmachine: (functional-042563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:34:e7", ip: ""} in network mk-functional-042563: {Iface:virbr1 ExpiryTime:2025-10-01 18:56:42 +0000 UTC Type:0 Mac:52:54:00:56:34:e7 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-042563 Clientid:01:52:54:00:56:34:e7}
I1001 17:59:59.354877   22672 main.go:141] libmachine: (functional-042563) DBG | domain functional-042563 has defined IP address 192.168.39.65 and MAC address 52:54:00:56:34:e7 in network mk-functional-042563
I1001 17:59:59.355062   22672 main.go:141] libmachine: (functional-042563) Calling .GetSSHPort
I1001 17:59:59.355246   22672 main.go:141] libmachine: (functional-042563) Calling .GetSSHKeyPath
I1001 17:59:59.355400   22672 main.go:141] libmachine: (functional-042563) Calling .GetSSHUsername
I1001 17:59:59.355574   22672 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/functional-042563/id_rsa Username:docker}
I1001 17:59:59.443752   22672 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 17:59:59.705792   22672 main.go:141] libmachine: Making call to close driver server
I1001 17:59:59.705813   22672 main.go:141] libmachine: (functional-042563) Calling .Close
I1001 17:59:59.706204   22672 main.go:141] libmachine: Successfully made call to close driver server
I1001 17:59:59.706223   22672 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 17:59:59.706234   22672 main.go:141] libmachine: Making call to close driver server
I1001 17:59:59.706242   22672 main.go:141] libmachine: (functional-042563) Calling .Close
I1001 17:59:59.706471   22672 main.go:141] libmachine: Successfully made call to close driver server
I1001 17:59:59.706489   22672 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.45s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-042563 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-042563"],"size":"4945146"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e
732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"bcc3b5f0ca115b6377518b4efcbf282c12d9f989ae7932e12d5001a9b6ae4746","repoDigests":["localhost/minikube-local-cache-test@sha256:34f1fe8c318efccfafc6a1a71eed50af4270a21321e437ef46d61dcbc35507e2"],"repoTags":["localhost/minikube-local-cache-test:functional-042563"],"size":"3330"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sh
a256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9
cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"203ad09fc1566a329c1d2af8d1f219b28fd2c00b69e743bd572b7f662365432d","repoDigests":["docker.io/library/nginx@sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c","docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc"],"repoTags":["docker.io/library/nginx:latest"],"size":"196550530"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storag
e-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/
metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-m
inikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-042563 image ls --format json --alsologtostderr:
I1001 17:59:59.052400   22648 out.go:360] Setting OutFile to fd 1 ...
I1001 17:59:59.052642   22648 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 17:59:59.052650   22648 out.go:374] Setting ErrFile to fd 2...
I1001 17:59:59.052655   22648 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 17:59:59.052826   22648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
I1001 17:59:59.053387   22648 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 17:59:59.053488   22648 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 17:59:59.053823   22648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 17:59:59.053884   22648 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 17:59:59.066842   22648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
I1001 17:59:59.067339   22648 main.go:141] libmachine: () Calling .GetVersion
I1001 17:59:59.067918   22648 main.go:141] libmachine: Using API Version  1
I1001 17:59:59.067946   22648 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 17:59:59.068267   22648 main.go:141] libmachine: () Calling .GetMachineName
I1001 17:59:59.068466   22648 main.go:141] libmachine: (functional-042563) Calling .GetState
I1001 17:59:59.070679   22648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 17:59:59.070725   22648 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 17:59:59.083824   22648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
I1001 17:59:59.084240   22648 main.go:141] libmachine: () Calling .GetVersion
I1001 17:59:59.084705   22648 main.go:141] libmachine: Using API Version  1
I1001 17:59:59.084728   22648 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 17:59:59.085180   22648 main.go:141] libmachine: () Calling .GetMachineName
I1001 17:59:59.085450   22648 main.go:141] libmachine: (functional-042563) Calling .DriverName
I1001 17:59:59.085707   22648 ssh_runner.go:195] Run: systemctl --version
I1001 17:59:59.085743   22648 main.go:141] libmachine: (functional-042563) Calling .GetSSHHostname
I1001 17:59:59.089106   22648 main.go:141] libmachine: (functional-042563) DBG | domain functional-042563 has defined MAC address 52:54:00:56:34:e7 in network mk-functional-042563
I1001 17:59:59.089582   22648 main.go:141] libmachine: (functional-042563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:34:e7", ip: ""} in network mk-functional-042563: {Iface:virbr1 ExpiryTime:2025-10-01 18:56:42 +0000 UTC Type:0 Mac:52:54:00:56:34:e7 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-042563 Clientid:01:52:54:00:56:34:e7}
I1001 17:59:59.089609   22648 main.go:141] libmachine: (functional-042563) DBG | domain functional-042563 has defined IP address 192.168.39.65 and MAC address 52:54:00:56:34:e7 in network mk-functional-042563
I1001 17:59:59.089858   22648 main.go:141] libmachine: (functional-042563) Calling .GetSSHPort
I1001 17:59:59.090048   22648 main.go:141] libmachine: (functional-042563) Calling .GetSSHKeyPath
I1001 17:59:59.090216   22648 main.go:141] libmachine: (functional-042563) Calling .GetSSHUsername
I1001 17:59:59.090358   22648 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/functional-042563/id_rsa Username:docker}
I1001 17:59:59.186372   22648 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 17:59:59.257764   22648 main.go:141] libmachine: Making call to close driver server
I1001 17:59:59.257776   22648 main.go:141] libmachine: (functional-042563) Calling .Close
I1001 17:59:59.258101   22648 main.go:141] libmachine: Successfully made call to close driver server
I1001 17:59:59.258120   22648 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 17:59:59.258143   22648 main.go:141] libmachine: Making call to close driver server
I1001 17:59:59.258150   22648 main.go:141] libmachine: (functional-042563) Calling .Close
I1001 17:59:59.258400   22648 main.go:141] libmachine: Successfully made call to close driver server
I1001 17:59:59.258418   22648 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 17:59:59.258459   22648 main.go:141] libmachine: (functional-042563) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-042563 image ls --format yaml --alsologtostderr:
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 203ad09fc1566a329c1d2af8d1f219b28fd2c00b69e743bd572b7f662365432d
repoDigests:
- docker.io/library/nginx@sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c
- docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc
repoTags:
- docker.io/library/nginx:latest
size: "196550530"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-042563
size: "4945146"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: bcc3b5f0ca115b6377518b4efcbf282c12d9f989ae7932e12d5001a9b6ae4746
repoDigests:
- localhost/minikube-local-cache-test@sha256:34f1fe8c318efccfafc6a1a71eed50af4270a21321e437ef46d61dcbc35507e2
repoTags:
- localhost/minikube-local-cache-test:functional-042563
size: "3330"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-042563 image ls --format yaml --alsologtostderr:
I1001 17:59:54.005389   22545 out.go:360] Setting OutFile to fd 1 ...
I1001 17:59:54.005545   22545 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 17:59:54.005556   22545 out.go:374] Setting ErrFile to fd 2...
I1001 17:59:54.005563   22545 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 17:59:54.005739   22545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
I1001 17:59:54.006293   22545 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 17:59:54.006448   22545 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 17:59:54.006851   22545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 17:59:54.006920   22545 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 17:59:54.020271   22545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
I1001 17:59:54.020712   22545 main.go:141] libmachine: () Calling .GetVersion
I1001 17:59:54.021192   22545 main.go:141] libmachine: Using API Version  1
I1001 17:59:54.021221   22545 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 17:59:54.021595   22545 main.go:141] libmachine: () Calling .GetMachineName
I1001 17:59:54.021805   22545 main.go:141] libmachine: (functional-042563) Calling .GetState
I1001 17:59:54.023875   22545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 17:59:54.023914   22545 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 17:59:54.036836   22545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
I1001 17:59:54.037223   22545 main.go:141] libmachine: () Calling .GetVersion
I1001 17:59:54.037672   22545 main.go:141] libmachine: Using API Version  1
I1001 17:59:54.037698   22545 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 17:59:54.038022   22545 main.go:141] libmachine: () Calling .GetMachineName
I1001 17:59:54.038196   22545 main.go:141] libmachine: (functional-042563) Calling .DriverName
I1001 17:59:54.038395   22545 ssh_runner.go:195] Run: systemctl --version
I1001 17:59:54.038418   22545 main.go:141] libmachine: (functional-042563) Calling .GetSSHHostname
I1001 17:59:54.041378   22545 main.go:141] libmachine: (functional-042563) DBG | domain functional-042563 has defined MAC address 52:54:00:56:34:e7 in network mk-functional-042563
I1001 17:59:54.041780   22545 main.go:141] libmachine: (functional-042563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:34:e7", ip: ""} in network mk-functional-042563: {Iface:virbr1 ExpiryTime:2025-10-01 18:56:42 +0000 UTC Type:0 Mac:52:54:00:56:34:e7 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-042563 Clientid:01:52:54:00:56:34:e7}
I1001 17:59:54.041804   22545 main.go:141] libmachine: (functional-042563) DBG | domain functional-042563 has defined IP address 192.168.39.65 and MAC address 52:54:00:56:34:e7 in network mk-functional-042563
I1001 17:59:54.042000   22545 main.go:141] libmachine: (functional-042563) Calling .GetSSHPort
I1001 17:59:54.042146   22545 main.go:141] libmachine: (functional-042563) Calling .GetSSHKeyPath
I1001 17:59:54.042272   22545 main.go:141] libmachine: (functional-042563) Calling .GetSSHUsername
I1001 17:59:54.042410   22545 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/functional-042563/id_rsa Username:docker}
I1001 17:59:54.163383   22545 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 17:59:54.255642   22545 main.go:141] libmachine: Making call to close driver server
I1001 17:59:54.255654   22545 main.go:141] libmachine: (functional-042563) Calling .Close
I1001 17:59:54.255921   22545 main.go:141] libmachine: Successfully made call to close driver server
I1001 17:59:54.255941   22545 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 17:59:54.255953   22545 main.go:141] libmachine: (functional-042563) DBG | Closing plugin on server side
I1001 17:59:54.255973   22545 main.go:141] libmachine: Making call to close driver server
I1001 17:59:54.255986   22545 main.go:141] libmachine: (functional-042563) Calling .Close
I1001 17:59:54.256208   22545 main.go:141] libmachine: Successfully made call to close driver server
I1001 17:59:54.256228   22545 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 17:59:54.256306   22545 main.go:141] libmachine: (functional-042563) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042563 ssh pgrep buildkitd: exit status 1 (219.819029ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image build -t localhost/my-image:functional-042563 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-042563 image build -t localhost/my-image:functional-042563 testdata/build --alsologtostderr: (5.73473027s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-042563 image build -t localhost/my-image:functional-042563 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1ca02ffb42c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-042563
--> 150a99bd950
Successfully tagged localhost/my-image:functional-042563
150a99bd950fe7c4c5273f48826abe4140b742da2020c534ee7baf06a287191e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-042563 image build -t localhost/my-image:functional-042563 testdata/build --alsologtostderr:
I1001 17:59:54.536340   22598 out.go:360] Setting OutFile to fd 1 ...
I1001 17:59:54.536531   22598 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 17:59:54.536545   22598 out.go:374] Setting ErrFile to fd 2...
I1001 17:59:54.536552   22598 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 17:59:54.536809   22598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
I1001 17:59:54.537594   22598 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 17:59:54.538327   22598 config.go:182] Loaded profile config "functional-042563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 17:59:54.538931   22598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 17:59:54.538978   22598 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 17:59:54.552006   22598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
I1001 17:59:54.552473   22598 main.go:141] libmachine: () Calling .GetVersion
I1001 17:59:54.552977   22598 main.go:141] libmachine: Using API Version  1
I1001 17:59:54.553005   22598 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 17:59:54.553392   22598 main.go:141] libmachine: () Calling .GetMachineName
I1001 17:59:54.553594   22598 main.go:141] libmachine: (functional-042563) Calling .GetState
I1001 17:59:54.555527   22598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 17:59:54.555577   22598 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 17:59:54.569008   22598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
I1001 17:59:54.569552   22598 main.go:141] libmachine: () Calling .GetVersion
I1001 17:59:54.569951   22598 main.go:141] libmachine: Using API Version  1
I1001 17:59:54.569991   22598 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 17:59:54.570313   22598 main.go:141] libmachine: () Calling .GetMachineName
I1001 17:59:54.570501   22598 main.go:141] libmachine: (functional-042563) Calling .DriverName
I1001 17:59:54.570687   22598 ssh_runner.go:195] Run: systemctl --version
I1001 17:59:54.570711   22598 main.go:141] libmachine: (functional-042563) Calling .GetSSHHostname
I1001 17:59:54.573594   22598 main.go:141] libmachine: (functional-042563) DBG | domain functional-042563 has defined MAC address 52:54:00:56:34:e7 in network mk-functional-042563
I1001 17:59:54.574044   22598 main.go:141] libmachine: (functional-042563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:34:e7", ip: ""} in network mk-functional-042563: {Iface:virbr1 ExpiryTime:2025-10-01 18:56:42 +0000 UTC Type:0 Mac:52:54:00:56:34:e7 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-042563 Clientid:01:52:54:00:56:34:e7}
I1001 17:59:54.574071   22598 main.go:141] libmachine: (functional-042563) DBG | domain functional-042563 has defined IP address 192.168.39.65 and MAC address 52:54:00:56:34:e7 in network mk-functional-042563
I1001 17:59:54.574232   22598 main.go:141] libmachine: (functional-042563) Calling .GetSSHPort
I1001 17:59:54.574384   22598 main.go:141] libmachine: (functional-042563) Calling .GetSSHKeyPath
I1001 17:59:54.574673   22598 main.go:141] libmachine: (functional-042563) Calling .GetSSHUsername
I1001 17:59:54.574819   22598 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/functional-042563/id_rsa Username:docker}
I1001 17:59:54.664887   22598 build_images.go:161] Building image from path: /tmp/build.702024635.tar
I1001 17:59:54.664971   22598 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1001 17:59:54.680703   22598 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.702024635.tar
I1001 17:59:54.686231   22598 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.702024635.tar: stat -c "%s %y" /var/lib/minikube/build/build.702024635.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.702024635.tar': No such file or directory
I1001 17:59:54.686264   22598 ssh_runner.go:362] scp /tmp/build.702024635.tar --> /var/lib/minikube/build/build.702024635.tar (3072 bytes)
I1001 17:59:54.725032   22598 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.702024635
I1001 17:59:54.738953   22598 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.702024635 -xf /var/lib/minikube/build/build.702024635.tar
I1001 17:59:54.754087   22598 crio.go:315] Building image: /var/lib/minikube/build/build.702024635
I1001 17:59:54.754172   22598 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-042563 /var/lib/minikube/build/build.702024635 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1001 18:00:00.166098   22598 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-042563 /var/lib/minikube/build/build.702024635 --cgroup-manager=cgroupfs: (5.411893007s)
I1001 18:00:00.166174   22598 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.702024635
I1001 18:00:00.189014   22598 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.702024635.tar
I1001 18:00:00.209521   22598 build_images.go:217] Built localhost/my-image:functional-042563 from /tmp/build.702024635.tar
I1001 18:00:00.209568   22598 build_images.go:133] succeeded building to: functional-042563
I1001 18:00:00.209572   22598 build_images.go:134] failed building to: 
I1001 18:00:00.209627   22598 main.go:141] libmachine: Making call to close driver server
I1001 18:00:00.209660   22598 main.go:141] libmachine: (functional-042563) Calling .Close
I1001 18:00:00.209977   22598 main.go:141] libmachine: Successfully made call to close driver server
I1001 18:00:00.209997   22598 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 18:00:00.210005   22598 main.go:141] libmachine: Making call to close driver server
I1001 18:00:00.210013   22598 main.go:141] libmachine: (functional-042563) Calling .Close
I1001 18:00:00.210372   22598 main.go:141] libmachine: (functional-042563) DBG | Closing plugin on server side
I1001 18:00:00.210388   22598 main.go:141] libmachine: Successfully made call to close driver server
I1001 18:00:00.210401   22598 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.74s)

TestFunctional/parallel/ImageCommands/Setup (1.76s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.73482439s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-042563
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image load --daemon kicbase/echo-server:functional-042563 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-042563 image load --daemon kicbase/echo-server:functional-042563 --alsologtostderr: (1.288965992s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "328.893046ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "47.668569ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "301.353371ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "56.860933ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image load --daemon kicbase/echo-server:functional-042563 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.64s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdany-port910525412/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759341573469028312" to /tmp/TestFunctionalparallelMountCmdany-port910525412/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759341573469028312" to /tmp/TestFunctionalparallelMountCmdany-port910525412/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759341573469028312" to /tmp/TestFunctionalparallelMountCmdany-port910525412/001/test-1759341573469028312
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (196.203015ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 17:59:33.665535   13469 retry.go:31] will retry after 601.172421ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  1 17:59 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  1 17:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  1 17:59 test-1759341573469028312
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh cat /mount-9p/test-1759341573469028312
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-042563 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d7b1479c-7c04-4ba5-92df-9445db4abd5e] Pending
helpers_test.go:352: "busybox-mount" [d7b1479c-7c04-4ba5-92df-9445db4abd5e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d7b1479c-7c04-4ba5-92df-9445db4abd5e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d7b1479c-7c04-4ba5-92df-9445db4abd5e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.009594803s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-042563 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdany-port910525412/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.64s)
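Note: the any-port log above shows the usual startup race: the first `findmnt -T /mount-9p | grep 9p` over ssh exits 1 before the 9p mount is ready, and the harness retries roughly 600ms later. Below is a minimal Go sketch of that poll-and-retry pattern; the binary path, profile name, and mount point are copied from this run, and `waitForMount` is an illustrative helper, not minikube's actual test code.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls the guest until the 9p mount shows up, mirroring the
// findmnt retry visible in the log above. Command strings are copied from
// this run; the fixed retry policy here is a simplifying assumption.
func waitForMount(profile, mountPoint string, attempts int, delay time.Duration) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mount ready:\n%s", out)
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("%s never appeared after %d attempts", mountPoint, attempts)
}

func main() {
	if err := waitForMount("functional-042563", "/mount-9p", 10, 600*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```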

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-042563
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image load --daemon kicbase/echo-server:functional-042563 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image save kicbase/echo-server:functional-042563 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image rm kicbase/echo-server:functional-042563 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-042563
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 image save --daemon kicbase/echo-server:functional-042563 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-042563
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
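Note: ImageSaveDaemon first removes the host-side tag, then has minikube write the cluster's copy of the image back into the local Docker daemon, and finally verifies it with `docker image inspect` under the `localhost/` prefix seen in the log. The sketch below mirrors those two verification steps; the profile and image names are taken from this run, and this is an illustration rather than the test's own helper.

```go
package main

import (
	"fmt"
	"os/exec"
)

// saveAndVerify saves an image from the cluster into the local Docker daemon
// and confirms it is visible to `docker image inspect`, mirroring the
// ImageSaveDaemon steps logged above.
func saveAndVerify(profile, image string) error {
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "save", "--daemon", image, "--alsologtostderr").CombinedOutput(); err != nil {
		return fmt.Errorf("image save failed: %v\n%s", err, out)
	}
	// The saved image is inspected under the localhost/ prefix, as in the log.
	if out, err := exec.Command("docker", "image", "inspect", "localhost/"+image).CombinedOutput(); err != nil {
		return fmt.Errorf("inspect failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := saveAndVerify("functional-042563", "kicbase/echo-server:functional-042563"); err != nil {
		fmt.Println(err)
	}
}
```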

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 service list -o json
functional_test.go:1504: Took "253.193196ms" to run "out/minikube-linux-amd64 -p functional-042563 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.65:30539
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.65:30539
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.68s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdspecific-port1133599534/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (236.021957ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 17:59:42.343357   13469 retry.go:31] will retry after 376.554999ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdspecific-port1133599534/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042563 ssh "sudo umount -f /mount-9p": exit status 1 (222.488943ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-042563 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdspecific-port1133599534/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.68s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1425261866/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1425261866/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1425261866/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T" /mount1: exit status 1 (244.431012ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 17:59:44.031083   13469 retry.go:31] will retry after 478.570693ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-042563 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-042563 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1425261866/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1425261866/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042563 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1425261866/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-042563
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-042563
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-042563
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (241.15s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:01:06.820449   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:01:34.525235   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (4m0.462625857s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (241.15s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.14s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 kubectl -- rollout status deployment/busybox: (5.034721294s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-qkt6x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-vw27l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-xwgx6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-qkt6x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-vw27l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-xwgx6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-qkt6x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-vw27l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-xwgx6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.14s)
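Note: DeployApp applies the busybox deployment, waits for the rollout, and then resolves kubernetes.io, kubernetes.default, and the fully qualified service name from every pod. Below is a small Go sketch of that per-pod DNS check; it calls kubectl directly for brevity (the test goes through the `minikube ... kubectl --` wrapper as logged), and the pod names are hard-coded from this run instead of being discovered via the jsonpath query above.

```go
package main

import (
	"fmt"
	"os/exec"
)

// checkDNS runs nslookup inside each busybox pod, mirroring the per-pod
// resolution checks logged above. Pod names are taken from this run for
// illustration; the real test discovers them with a jsonpath query.
func checkDNS(context string, pods, names []string) error {
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", context,
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				return fmt.Errorf("%s could not resolve %s: %v\n%s", pod, name, err, out)
			}
		}
	}
	return nil
}

func main() {
	pods := []string{"busybox-7b57f96db7-qkt6x", "busybox-7b57f96db7-vw27l", "busybox-7b57f96db7-xwgx6"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	if err := checkDNS("ha-615367", pods, names); err != nil {
		fmt.Println(err)
	}
}
```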

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.19s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-qkt6x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-qkt6x -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-vw27l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-vw27l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-xwgx6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 kubectl -- exec busybox-7b57f96db7-xwgx6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.09s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 node add --alsologtostderr -v 5
E1001 18:04:29.919196   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:29.925593   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:29.936954   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:29.958336   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:29.999788   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:30.081216   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:30.242726   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:30.564387   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:31.206070   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:32.487912   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:35.049388   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:40.171831   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:04:50.413762   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:05:10.895862   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 node add --alsologtostderr -v 5: (43.210797023s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.09s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-615367 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.68s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp testdata/cp-test.txt ha-615367:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2130602337/001/cp-test_ha-615367.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367:/home/docker/cp-test.txt ha-615367-m02:/home/docker/cp-test_ha-615367_ha-615367-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m02 "sudo cat /home/docker/cp-test_ha-615367_ha-615367-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367:/home/docker/cp-test.txt ha-615367-m03:/home/docker/cp-test_ha-615367_ha-615367-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m03 "sudo cat /home/docker/cp-test_ha-615367_ha-615367-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367:/home/docker/cp-test.txt ha-615367-m04:/home/docker/cp-test_ha-615367_ha-615367-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m04 "sudo cat /home/docker/cp-test_ha-615367_ha-615367-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp testdata/cp-test.txt ha-615367-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2130602337/001/cp-test_ha-615367-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m02:/home/docker/cp-test.txt ha-615367:/home/docker/cp-test_ha-615367-m02_ha-615367.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367 "sudo cat /home/docker/cp-test_ha-615367-m02_ha-615367.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m02:/home/docker/cp-test.txt ha-615367-m03:/home/docker/cp-test_ha-615367-m02_ha-615367-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m03 "sudo cat /home/docker/cp-test_ha-615367-m02_ha-615367-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m02:/home/docker/cp-test.txt ha-615367-m04:/home/docker/cp-test_ha-615367-m02_ha-615367-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m04 "sudo cat /home/docker/cp-test_ha-615367-m02_ha-615367-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp testdata/cp-test.txt ha-615367-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2130602337/001/cp-test_ha-615367-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m03:/home/docker/cp-test.txt ha-615367:/home/docker/cp-test_ha-615367-m03_ha-615367.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367 "sudo cat /home/docker/cp-test_ha-615367-m03_ha-615367.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m03:/home/docker/cp-test.txt ha-615367-m02:/home/docker/cp-test_ha-615367-m03_ha-615367-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m02 "sudo cat /home/docker/cp-test_ha-615367-m03_ha-615367-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m03:/home/docker/cp-test.txt ha-615367-m04:/home/docker/cp-test_ha-615367-m03_ha-615367-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m04 "sudo cat /home/docker/cp-test_ha-615367-m03_ha-615367-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp testdata/cp-test.txt ha-615367-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2130602337/001/cp-test_ha-615367-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m04:/home/docker/cp-test.txt ha-615367:/home/docker/cp-test_ha-615367-m04_ha-615367.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367 "sudo cat /home/docker/cp-test_ha-615367-m04_ha-615367.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m04:/home/docker/cp-test.txt ha-615367-m02:/home/docker/cp-test_ha-615367-m04_ha-615367-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m02 "sudo cat /home/docker/cp-test_ha-615367-m04_ha-615367-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 cp ha-615367-m04:/home/docker/cp-test.txt ha-615367-m03:/home/docker/cp-test_ha-615367-m04_ha-615367-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 ssh -n ha-615367-m03 "sudo cat /home/docker/cp-test_ha-615367-m04_ha-615367-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.68s)
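Note: CopyFile pushes testdata/cp-test.txt to every node, copies it between each pair of nodes, and reads each copy back with `ssh -n <node> "sudo cat ..."` to confirm the contents survived. The sketch below performs one such round trip under the same assumptions (binary path and profile from this run); the real test iterates over all node combinations as logged above.

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// roundTrip copies a local file onto a node and reads it back over ssh,
// mirroring one cp/ssh pair from the CopyFile log above.
func roundTrip(profile, node, local, remote string) error {
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", local, node+":"+remote).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v\n%s", err, out)
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, fmt.Sprintf("sudo cat %s", remote)).Output()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v", err)
	}
	want, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("contents differ after round trip")
	}
	return nil
}

func main() {
	if err := roundTrip("ha-615367", "ha-615367-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("round trip ok")
}
```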

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (72.97s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 node stop m02 --alsologtostderr -v 5
E1001 18:05:51.857564   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:06:06.819622   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 node stop m02 --alsologtostderr -v 5: (1m12.316052742s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-615367 status --alsologtostderr -v 5: exit status 7 (657.165184ms)

                                                
                                                
-- stdout --
	ha-615367
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-615367-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615367-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-615367-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 18:06:38.621500   27483 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:06:38.621772   27483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:06:38.621784   27483 out.go:374] Setting ErrFile to fd 2...
	I1001 18:06:38.621789   27483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:06:38.621969   27483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 18:06:38.622132   27483 out.go:368] Setting JSON to false
	I1001 18:06:38.622155   27483 mustload.go:65] Loading cluster: ha-615367
	I1001 18:06:38.622288   27483 notify.go:220] Checking for updates...
	I1001 18:06:38.622635   27483 config.go:182] Loaded profile config "ha-615367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:06:38.622652   27483 status.go:174] checking status of ha-615367 ...
	I1001 18:06:38.623150   27483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:06:38.623192   27483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:06:38.643681   27483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I1001 18:06:38.644247   27483 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:06:38.644888   27483 main.go:141] libmachine: Using API Version  1
	I1001 18:06:38.644921   27483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:06:38.645309   27483 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:06:38.645556   27483 main.go:141] libmachine: (ha-615367) Calling .GetState
	I1001 18:06:38.647572   27483 status.go:371] ha-615367 host status = "Running" (err=<nil>)
	I1001 18:06:38.647586   27483 host.go:66] Checking if "ha-615367" exists ...
	I1001 18:06:38.647941   27483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:06:38.647996   27483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:06:38.661999   27483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46267
	I1001 18:06:38.662420   27483 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:06:38.662867   27483 main.go:141] libmachine: Using API Version  1
	I1001 18:06:38.662890   27483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:06:38.663265   27483 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:06:38.663462   27483 main.go:141] libmachine: (ha-615367) Calling .GetIP
	I1001 18:06:38.667101   27483 main.go:141] libmachine: (ha-615367) DBG | domain ha-615367 has defined MAC address 52:54:00:ce:d1:6d in network mk-ha-615367
	I1001 18:06:38.667632   27483 main.go:141] libmachine: (ha-615367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d1:6d", ip: ""} in network mk-ha-615367: {Iface:virbr1 ExpiryTime:2025-10-01 19:00:34 +0000 UTC Type:0 Mac:52:54:00:ce:d1:6d Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-615367 Clientid:01:52:54:00:ce:d1:6d}
	I1001 18:06:38.667658   27483 main.go:141] libmachine: (ha-615367) DBG | domain ha-615367 has defined IP address 192.168.39.250 and MAC address 52:54:00:ce:d1:6d in network mk-ha-615367
	I1001 18:06:38.667827   27483 host.go:66] Checking if "ha-615367" exists ...
	I1001 18:06:38.668100   27483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:06:38.668133   27483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:06:38.681046   27483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I1001 18:06:38.681493   27483 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:06:38.681989   27483 main.go:141] libmachine: Using API Version  1
	I1001 18:06:38.682031   27483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:06:38.682392   27483 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:06:38.682618   27483 main.go:141] libmachine: (ha-615367) Calling .DriverName
	I1001 18:06:38.682804   27483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 18:06:38.682825   27483 main.go:141] libmachine: (ha-615367) Calling .GetSSHHostname
	I1001 18:06:38.685815   27483 main.go:141] libmachine: (ha-615367) DBG | domain ha-615367 has defined MAC address 52:54:00:ce:d1:6d in network mk-ha-615367
	I1001 18:06:38.686294   27483 main.go:141] libmachine: (ha-615367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d1:6d", ip: ""} in network mk-ha-615367: {Iface:virbr1 ExpiryTime:2025-10-01 19:00:34 +0000 UTC Type:0 Mac:52:54:00:ce:d1:6d Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-615367 Clientid:01:52:54:00:ce:d1:6d}
	I1001 18:06:38.686316   27483 main.go:141] libmachine: (ha-615367) DBG | domain ha-615367 has defined IP address 192.168.39.250 and MAC address 52:54:00:ce:d1:6d in network mk-ha-615367
	I1001 18:06:38.686518   27483 main.go:141] libmachine: (ha-615367) Calling .GetSSHPort
	I1001 18:06:38.686695   27483 main.go:141] libmachine: (ha-615367) Calling .GetSSHKeyPath
	I1001 18:06:38.686838   27483 main.go:141] libmachine: (ha-615367) Calling .GetSSHUsername
	I1001 18:06:38.686969   27483 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/ha-615367/id_rsa Username:docker}
	I1001 18:06:38.777800   27483 ssh_runner.go:195] Run: systemctl --version
	I1001 18:06:38.785178   27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:06:38.805728   27483 kubeconfig.go:125] found "ha-615367" server: "https://192.168.39.254:8443"
	I1001 18:06:38.805775   27483 api_server.go:166] Checking apiserver status ...
	I1001 18:06:38.805813   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:06:38.829241   27483 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
	W1001 18:06:38.840868   27483 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:06:38.840930   27483 ssh_runner.go:195] Run: ls
	I1001 18:06:38.846050   27483 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1001 18:06:38.853886   27483 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1001 18:06:38.853910   27483 status.go:463] ha-615367 apiserver status = Running (err=<nil>)
	I1001 18:06:38.853922   27483 status.go:176] ha-615367 status: &{Name:ha-615367 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:06:38.853939   27483 status.go:174] checking status of ha-615367-m02 ...
	I1001 18:06:38.854334   27483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:06:38.854380   27483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:06:38.867739   27483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36041
	I1001 18:06:38.868226   27483 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:06:38.868787   27483 main.go:141] libmachine: Using API Version  1
	I1001 18:06:38.868823   27483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:06:38.869152   27483 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:06:38.869345   27483 main.go:141] libmachine: (ha-615367-m02) Calling .GetState
	I1001 18:06:38.871367   27483 status.go:371] ha-615367-m02 host status = "Stopped" (err=<nil>)
	I1001 18:06:38.871382   27483 status.go:384] host is not running, skipping remaining checks
	I1001 18:06:38.871389   27483 status.go:176] ha-615367-m02 status: &{Name:ha-615367-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:06:38.871421   27483 status.go:174] checking status of ha-615367-m03 ...
	I1001 18:06:38.871722   27483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:06:38.871764   27483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:06:38.884161   27483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I1001 18:06:38.884600   27483 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:06:38.885064   27483 main.go:141] libmachine: Using API Version  1
	I1001 18:06:38.885086   27483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:06:38.885393   27483 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:06:38.885653   27483 main.go:141] libmachine: (ha-615367-m03) Calling .GetState
	I1001 18:06:38.887324   27483 status.go:371] ha-615367-m03 host status = "Running" (err=<nil>)
	I1001 18:06:38.887339   27483 host.go:66] Checking if "ha-615367-m03" exists ...
	I1001 18:06:38.887633   27483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:06:38.887666   27483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:06:38.900389   27483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39659
	I1001 18:06:38.900817   27483 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:06:38.901262   27483 main.go:141] libmachine: Using API Version  1
	I1001 18:06:38.901282   27483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:06:38.901591   27483 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:06:38.901784   27483 main.go:141] libmachine: (ha-615367-m03) Calling .GetIP
	I1001 18:06:38.905026   27483 main.go:141] libmachine: (ha-615367-m03) DBG | domain ha-615367-m03 has defined MAC address 52:54:00:99:a1:01 in network mk-ha-615367
	I1001 18:06:38.905531   27483 main.go:141] libmachine: (ha-615367-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a1:01", ip: ""} in network mk-ha-615367: {Iface:virbr1 ExpiryTime:2025-10-01 19:03:11 +0000 UTC Type:0 Mac:52:54:00:99:a1:01 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-615367-m03 Clientid:01:52:54:00:99:a1:01}
	I1001 18:06:38.905554   27483 main.go:141] libmachine: (ha-615367-m03) DBG | domain ha-615367-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:99:a1:01 in network mk-ha-615367
	I1001 18:06:38.905743   27483 host.go:66] Checking if "ha-615367-m03" exists ...
	I1001 18:06:38.906045   27483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:06:38.906090   27483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:06:38.919270   27483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I1001 18:06:38.919831   27483 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:06:38.920328   27483 main.go:141] libmachine: Using API Version  1
	I1001 18:06:38.920348   27483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:06:38.920728   27483 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:06:38.920914   27483 main.go:141] libmachine: (ha-615367-m03) Calling .DriverName
	I1001 18:06:38.921132   27483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 18:06:38.921151   27483 main.go:141] libmachine: (ha-615367-m03) Calling .GetSSHHostname
	I1001 18:06:38.924341   27483 main.go:141] libmachine: (ha-615367-m03) DBG | domain ha-615367-m03 has defined MAC address 52:54:00:99:a1:01 in network mk-ha-615367
	I1001 18:06:38.924850   27483 main.go:141] libmachine: (ha-615367-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a1:01", ip: ""} in network mk-ha-615367: {Iface:virbr1 ExpiryTime:2025-10-01 19:03:11 +0000 UTC Type:0 Mac:52:54:00:99:a1:01 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-615367-m03 Clientid:01:52:54:00:99:a1:01}
	I1001 18:06:38.924894   27483 main.go:141] libmachine: (ha-615367-m03) DBG | domain ha-615367-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:99:a1:01 in network mk-ha-615367
	I1001 18:06:38.925025   27483 main.go:141] libmachine: (ha-615367-m03) Calling .GetSSHPort
	I1001 18:06:38.925179   27483 main.go:141] libmachine: (ha-615367-m03) Calling .GetSSHKeyPath
	I1001 18:06:38.925350   27483 main.go:141] libmachine: (ha-615367-m03) Calling .GetSSHUsername
	I1001 18:06:38.925503   27483 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/ha-615367-m03/id_rsa Username:docker}
	I1001 18:06:39.007130   27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:06:39.025323   27483 kubeconfig.go:125] found "ha-615367" server: "https://192.168.39.254:8443"
	I1001 18:06:39.025359   27483 api_server.go:166] Checking apiserver status ...
	I1001 18:06:39.025420   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:06:39.046716   27483 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1810/cgroup
	W1001 18:06:39.057946   27483 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1810/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:06:39.058004   27483 ssh_runner.go:195] Run: ls
	I1001 18:06:39.063420   27483 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1001 18:06:39.068800   27483 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1001 18:06:39.068822   27483 status.go:463] ha-615367-m03 apiserver status = Running (err=<nil>)
	I1001 18:06:39.068830   27483 status.go:176] ha-615367-m03 status: &{Name:ha-615367-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:06:39.068843   27483 status.go:174] checking status of ha-615367-m04 ...
	I1001 18:06:39.069122   27483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:06:39.069155   27483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:06:39.082289   27483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I1001 18:06:39.082772   27483 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:06:39.083191   27483 main.go:141] libmachine: Using API Version  1
	I1001 18:06:39.083214   27483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:06:39.083655   27483 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:06:39.083848   27483 main.go:141] libmachine: (ha-615367-m04) Calling .GetState
	I1001 18:06:39.085752   27483 status.go:371] ha-615367-m04 host status = "Running" (err=<nil>)
	I1001 18:06:39.085772   27483 host.go:66] Checking if "ha-615367-m04" exists ...
	I1001 18:06:39.086182   27483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:06:39.086214   27483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:06:39.099662   27483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I1001 18:06:39.100080   27483 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:06:39.100491   27483 main.go:141] libmachine: Using API Version  1
	I1001 18:06:39.100516   27483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:06:39.100829   27483 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:06:39.101015   27483 main.go:141] libmachine: (ha-615367-m04) Calling .GetIP
	I1001 18:06:39.103891   27483 main.go:141] libmachine: (ha-615367-m04) DBG | domain ha-615367-m04 has defined MAC address 52:54:00:59:9a:65 in network mk-ha-615367
	I1001 18:06:39.104358   27483 main.go:141] libmachine: (ha-615367-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9a:65", ip: ""} in network mk-ha-615367: {Iface:virbr1 ExpiryTime:2025-10-01 19:04:44 +0000 UTC Type:0 Mac:52:54:00:59:9a:65 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-615367-m04 Clientid:01:52:54:00:59:9a:65}
	I1001 18:06:39.104396   27483 main.go:141] libmachine: (ha-615367-m04) DBG | domain ha-615367-m04 has defined IP address 192.168.39.244 and MAC address 52:54:00:59:9a:65 in network mk-ha-615367
	I1001 18:06:39.104577   27483 host.go:66] Checking if "ha-615367-m04" exists ...
	I1001 18:06:39.104960   27483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:06:39.105001   27483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:06:39.118895   27483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40439
	I1001 18:06:39.119338   27483 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:06:39.119744   27483 main.go:141] libmachine: Using API Version  1
	I1001 18:06:39.119764   27483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:06:39.120172   27483 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:06:39.120386   27483 main.go:141] libmachine: (ha-615367-m04) Calling .DriverName
	I1001 18:06:39.120597   27483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 18:06:39.120622   27483 main.go:141] libmachine: (ha-615367-m04) Calling .GetSSHHostname
	I1001 18:06:39.123771   27483 main.go:141] libmachine: (ha-615367-m04) DBG | domain ha-615367-m04 has defined MAC address 52:54:00:59:9a:65 in network mk-ha-615367
	I1001 18:06:39.124260   27483 main.go:141] libmachine: (ha-615367-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9a:65", ip: ""} in network mk-ha-615367: {Iface:virbr1 ExpiryTime:2025-10-01 19:04:44 +0000 UTC Type:0 Mac:52:54:00:59:9a:65 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-615367-m04 Clientid:01:52:54:00:59:9a:65}
	I1001 18:06:39.124285   27483 main.go:141] libmachine: (ha-615367-m04) DBG | domain ha-615367-m04 has defined IP address 192.168.39.244 and MAC address 52:54:00:59:9a:65 in network mk-ha-615367
	I1001 18:06:39.124465   27483 main.go:141] libmachine: (ha-615367-m04) Calling .GetSSHPort
	I1001 18:06:39.124642   27483 main.go:141] libmachine: (ha-615367-m04) Calling .GetSSHKeyPath
	I1001 18:06:39.124811   27483 main.go:141] libmachine: (ha-615367-m04) Calling .GetSSHUsername
	I1001 18:06:39.124963   27483 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/ha-615367-m04/id_rsa Username:docker}
	I1001 18:06:39.206034   27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:06:39.225458   27483 status.go:176] ha-615367-m04 status: &{Name:ha-615367-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (72.97s)
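The status check in the stderr block above verifies the apiserver in three steps: pgrep for the kube-apiserver process, a freezer-cgroup lookup that is allowed to fail (the warning above is non-fatal), and finally an HTTPS GET of /healthz that must return 200 with body "ok". Below is a minimal standalone sketch of that last step only, with the endpoint hard-coded from the log and TLS verification skipped purely for illustration; the real check authenticates with the cluster's client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log above; adjust for your own cluster.
	url := "https://192.168.39.254:8443/healthz"

	// TLS verification is skipped only in this sketch; the real check uses
	// the cluster's client certificates from the kubeconfig.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as seen in the log above.
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}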

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (43.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 node start m02 --alsologtostderr -v 5
E1001 18:07:13.779606   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 node start m02 --alsologtostderr -v 5: (42.133512771s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 stop --alsologtostderr -v 5
E1001 18:09:29.921516   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:09:57.621588   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:11:06.822471   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 stop --alsologtostderr -v 5: (4m7.13686816s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 start --wait true --alsologtostderr -v 5
E1001 18:12:29.886700   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 start --wait true --alsologtostderr -v 5: (2m4.201035003s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 node delete m03 --alsologtostderr -v 5: (17.681483022s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.44s)
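The go-template passed to kubectl at ha_test.go:521 above prints only the status of each node's Ready condition, one per line. The sketch below shows how that template evaluates; the one-node JSON sample is hypothetical but follows the shape returned by "kubectl get nodes -o json", and the template body is the one from the command above.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Hypothetical one-node sample in the shape of: kubectl get nodes -o json
const nodesJSON = `{"items":[{"metadata":{"name":"ha-615367"},"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}}]}`

// The Ready-condition template from the kubectl invocation above.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	// For each node, print the status of its Ready condition (here: " True").
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}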

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (255.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 stop --alsologtostderr -v 5
E1001 18:14:29.919506   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:16:06.816172   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 stop --alsologtostderr -v 5: (4m15.027661131s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-615367 status --alsologtostderr -v 5: exit status 7 (95.262529ms)

                                                
                                                
-- stdout --
	ha-615367
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615367-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615367-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 18:18:09.523733   31958 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:18:09.523973   31958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:18:09.523981   31958 out.go:374] Setting ErrFile to fd 2...
	I1001 18:18:09.523984   31958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:18:09.524162   31958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 18:18:09.524329   31958 out.go:368] Setting JSON to false
	I1001 18:18:09.524352   31958 mustload.go:65] Loading cluster: ha-615367
	I1001 18:18:09.524459   31958 notify.go:220] Checking for updates...
	I1001 18:18:09.524811   31958 config.go:182] Loaded profile config "ha-615367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:18:09.524829   31958 status.go:174] checking status of ha-615367 ...
	I1001 18:18:09.525270   31958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:18:09.525326   31958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:18:09.538655   31958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41475
	I1001 18:18:09.539181   31958 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:18:09.539803   31958 main.go:141] libmachine: Using API Version  1
	I1001 18:18:09.539829   31958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:18:09.540246   31958 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:18:09.540484   31958 main.go:141] libmachine: (ha-615367) Calling .GetState
	I1001 18:18:09.542367   31958 status.go:371] ha-615367 host status = "Stopped" (err=<nil>)
	I1001 18:18:09.542380   31958 status.go:384] host is not running, skipping remaining checks
	I1001 18:18:09.542385   31958 status.go:176] ha-615367 status: &{Name:ha-615367 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:18:09.542442   31958 status.go:174] checking status of ha-615367-m02 ...
	I1001 18:18:09.542755   31958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:18:09.542799   31958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:18:09.555720   31958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I1001 18:18:09.556091   31958 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:18:09.556507   31958 main.go:141] libmachine: Using API Version  1
	I1001 18:18:09.556530   31958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:18:09.556868   31958 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:18:09.557066   31958 main.go:141] libmachine: (ha-615367-m02) Calling .GetState
	I1001 18:18:09.558680   31958 status.go:371] ha-615367-m02 host status = "Stopped" (err=<nil>)
	I1001 18:18:09.558694   31958 status.go:384] host is not running, skipping remaining checks
	I1001 18:18:09.558700   31958 status.go:176] ha-615367-m02 status: &{Name:ha-615367-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:18:09.558720   31958 status.go:174] checking status of ha-615367-m04 ...
	I1001 18:18:09.558993   31958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:18:09.559058   31958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:18:09.571692   31958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I1001 18:18:09.572179   31958 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:18:09.572578   31958 main.go:141] libmachine: Using API Version  1
	I1001 18:18:09.572606   31958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:18:09.572964   31958 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:18:09.573141   31958 main.go:141] libmachine: (ha-615367-m04) Calling .GetState
	I1001 18:18:09.574747   31958 status.go:371] ha-615367-m04 host status = "Stopped" (err=<nil>)
	I1001 18:18:09.574763   31958 status.go:384] host is not running, skipping remaining checks
	I1001 18:18:09.574770   31958 status.go:176] ha-615367-m04 status: &{Name:ha-615367-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (255.12s)
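The &{Name:... Host:... Kubelet:...} lines in the stderr above are per-node status values dumped before they are rendered into the human-readable stdout block. The sketch below mirrors only the fields visible in those log lines (minikube's actual status type may differ) and reproduces both forms of output.

package main

import "fmt"

// Status mirrors the fields printed by the status.go lines above; it is a
// reading aid for this log, not necessarily minikube's actual type.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := Status{
		Name: "ha-615367-m04", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: true,
	}

	// %+v reproduces the &{Name:... Host:...} form seen in the stderr above.
	fmt.Printf("%+v\n", &s)

	// The stdout block is the same data rendered per node in readable form.
	fmt.Printf("%s\n\ttype: Worker\n\thost: %s\n\tkubelet: %s\n", s.Name, s.Host, s.Kubelet)
}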

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (104.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:19:29.919328   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m43.586024715s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (104.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (76.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 node add --control-plane --alsologtostderr -v 5
E1001 18:20:52.983727   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:21:06.816876   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-615367 node add --control-plane --alsologtostderr -v 5: (1m15.251534223s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-615367 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                    
x
+
TestJSONOutput/start/Command (85.12s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-984966 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-984966 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.115731394s)
--- PASS: TestJSONOutput/start/Command (85.12s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-984966 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-984966 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.58s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-984966 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-984966 --output=json --user=testUser: (7.582535125s)
--- PASS: TestJSONOutput/stop/Command (7.58s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-179466 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-179466 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (63.672336ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e6c83e1c-5326-4aa6-9976-7fcd17d8a341","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-179466] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"25029b35-d6dd-4e8b-a2e3-e3020bbf1f55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21631"}}
	{"specversion":"1.0","id":"a5428af5-00dd-414d-90a9-e4f93ddbc2cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8720976c-805d-4074-89a0-9e47fe550d68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig"}}
	{"specversion":"1.0","id":"15b05b87-81d5-4725-a912-b1c8ba1a9c1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube"}}
	{"specversion":"1.0","id":"89069fbe-a8d0-4544-b731-813dde66ebda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"78e57a81-5bf8-4ccf-ac0d-988b7d32a545","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"213302f2-d113-4831-909d-c760f90d7c25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-179466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-179466
--- PASS: TestErrorJSONOutput (0.20s)
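With --output=json, minikube emits one JSON event per line in the CloudEvents-style envelope shown in the stdout above; the final io.k8s.sigs.minikube.error event carries the error name and exit code. The sketch below decodes the error line copied verbatim from the log; the struct covers only the fields visible there.

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the --output=json lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// One line copied from the stdout block above.
	line := `{"specversion":"1.0","id":"213302f2-d113-4831-909d-c760f90d7c25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	// Error events of type io.k8s.sigs.minikube.error carry the exit code as a string.
	fmt.Println(e.Type, "exitcode =", e.Data["exitcode"])
}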

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (78.42s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-850172 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-850172 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.720729322s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-859606 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-859606 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.947755223s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-850172
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-859606
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-859606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-859606
helpers_test.go:175: Cleaning up "first-850172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-850172
--- PASS: TestMinikubeProfile (78.42s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (21.17s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-192699 --memory=3072 --mount-string /tmp/TestMountStartserial3220108181/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-192699 --memory=3072 --mount-string /tmp/TestMountStartserial3220108181/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.169383312s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.17s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-192699 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-192699 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
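The verification step above reads the mount back with "findmnt --json /minikube-host". The sketch below decodes that JSON shape; the sample payload and its source address are hypothetical, and the field names assume util-linux findmnt's standard JSON output.

package main

import (
	"encoding/json"
	"fmt"
)

// findmntOutput mirrors the assumed shape of `findmnt --json <target>` output.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Hypothetical sample of what the VerifyMount steps above read back.
	sample := `{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1","fstype":"9p","options":"rw,relatime"}]}`

	var out findmntOutput
	if err := json.Unmarshal([]byte(sample), &out); err != nil {
		panic(err)
	}
	for _, fs := range out.Filesystems {
		fmt.Printf("%s mounted from %s (%s)\n", fs.Target, fs.Source, fs.FSType)
	}
}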

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (23.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-205862 --memory=3072 --mount-string /tmp/TestMountStartserial3220108181/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:24:29.918801   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-205862 --memory=3072 --mount-string /tmp/TestMountStartserial3220108181/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.392015479s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.39s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-205862 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-205862 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-192699 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-205862 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-205862 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-205862
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-205862: (1.200622154s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (19.77s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-205862
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-205862: (18.77286206s)
--- PASS: TestMountStart/serial/RestartStopped (19.77s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-205862 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-205862 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (127.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-388877 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:26:06.816883   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-388877 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m7.190050288s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.61s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-388877 -- rollout status deployment/busybox: (4.25434981s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- exec busybox-7b57f96db7-9q5md -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- exec busybox-7b57f96db7-mgzdp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- exec busybox-7b57f96db7-9q5md -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- exec busybox-7b57f96db7-mgzdp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- exec busybox-7b57f96db7-9q5md -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- exec busybox-7b57f96db7-mgzdp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.73s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- exec busybox-7b57f96db7-9q5md -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- exec busybox-7b57f96db7-9q5md -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- exec busybox-7b57f96db7-mgzdp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-388877 -- exec busybox-7b57f96db7-mgzdp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (42.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-388877 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-388877 -v=5 --alsologtostderr: (41.728919263s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.29s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-388877 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp testdata/cp-test.txt multinode-388877:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp multinode-388877:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3122812455/001/cp-test_multinode-388877.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp multinode-388877:/home/docker/cp-test.txt multinode-388877-m02:/home/docker/cp-test_multinode-388877_multinode-388877-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m02 "sudo cat /home/docker/cp-test_multinode-388877_multinode-388877-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp multinode-388877:/home/docker/cp-test.txt multinode-388877-m03:/home/docker/cp-test_multinode-388877_multinode-388877-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m03 "sudo cat /home/docker/cp-test_multinode-388877_multinode-388877-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp testdata/cp-test.txt multinode-388877-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp multinode-388877-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3122812455/001/cp-test_multinode-388877-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp multinode-388877-m02:/home/docker/cp-test.txt multinode-388877:/home/docker/cp-test_multinode-388877-m02_multinode-388877.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877 "sudo cat /home/docker/cp-test_multinode-388877-m02_multinode-388877.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp multinode-388877-m02:/home/docker/cp-test.txt multinode-388877-m03:/home/docker/cp-test_multinode-388877-m02_multinode-388877-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m03 "sudo cat /home/docker/cp-test_multinode-388877-m02_multinode-388877-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp testdata/cp-test.txt multinode-388877-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp multinode-388877-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3122812455/001/cp-test_multinode-388877-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp multinode-388877-m03:/home/docker/cp-test.txt multinode-388877:/home/docker/cp-test_multinode-388877-m03_multinode-388877.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877 "sudo cat /home/docker/cp-test_multinode-388877-m03_multinode-388877.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 cp multinode-388877-m03:/home/docker/cp-test.txt multinode-388877-m02:/home/docker/cp-test_multinode-388877-m03_multinode-388877-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 ssh -n multinode-388877-m02 "sudo cat /home/docker/cp-test_multinode-388877-m03_multinode-388877-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.14s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-388877 node stop m03: (1.565687356s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-388877 status: exit status 7 (433.249508ms)

                                                
                                                
-- stdout --
	multinode-388877
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-388877-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-388877-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-388877 status --alsologtostderr: exit status 7 (417.136452ms)

                                                
                                                
-- stdout --
	multinode-388877
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-388877-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-388877-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 18:28:22.979595   39703 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:28:22.979857   39703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:28:22.979866   39703 out.go:374] Setting ErrFile to fd 2...
	I1001 18:28:22.979871   39703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:28:22.980072   39703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 18:28:22.980262   39703 out.go:368] Setting JSON to false
	I1001 18:28:22.980285   39703 mustload.go:65] Loading cluster: multinode-388877
	I1001 18:28:22.980410   39703 notify.go:220] Checking for updates...
	I1001 18:28:22.980676   39703 config.go:182] Loaded profile config "multinode-388877": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:28:22.980691   39703 status.go:174] checking status of multinode-388877 ...
	I1001 18:28:22.981207   39703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:28:22.981267   39703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:28:22.994918   39703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42257
	I1001 18:28:22.995344   39703 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:28:22.995956   39703 main.go:141] libmachine: Using API Version  1
	I1001 18:28:22.995991   39703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:28:22.996326   39703 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:28:22.996530   39703 main.go:141] libmachine: (multinode-388877) Calling .GetState
	I1001 18:28:22.998230   39703 status.go:371] multinode-388877 host status = "Running" (err=<nil>)
	I1001 18:28:22.998254   39703 host.go:66] Checking if "multinode-388877" exists ...
	I1001 18:28:22.998645   39703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:28:22.998702   39703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:28:23.013597   39703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33229
	I1001 18:28:23.014096   39703 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:28:23.014556   39703 main.go:141] libmachine: Using API Version  1
	I1001 18:28:23.014574   39703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:28:23.014894   39703 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:28:23.015080   39703 main.go:141] libmachine: (multinode-388877) Calling .GetIP
	I1001 18:28:23.018343   39703 main.go:141] libmachine: (multinode-388877) DBG | domain multinode-388877 has defined MAC address 52:54:00:15:7a:34 in network mk-multinode-388877
	I1001 18:28:23.018834   39703 main.go:141] libmachine: (multinode-388877) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:7a:34", ip: ""} in network mk-multinode-388877: {Iface:virbr1 ExpiryTime:2025-10-01 19:25:31 +0000 UTC Type:0 Mac:52:54:00:15:7a:34 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-388877 Clientid:01:52:54:00:15:7a:34}
	I1001 18:28:23.018871   39703 main.go:141] libmachine: (multinode-388877) DBG | domain multinode-388877 has defined IP address 192.168.39.187 and MAC address 52:54:00:15:7a:34 in network mk-multinode-388877
	I1001 18:28:23.018980   39703 host.go:66] Checking if "multinode-388877" exists ...
	I1001 18:28:23.019264   39703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:28:23.019297   39703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:28:23.032540   39703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33175
	I1001 18:28:23.033000   39703 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:28:23.033491   39703 main.go:141] libmachine: Using API Version  1
	I1001 18:28:23.033516   39703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:28:23.033873   39703 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:28:23.034053   39703 main.go:141] libmachine: (multinode-388877) Calling .DriverName
	I1001 18:28:23.034252   39703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 18:28:23.034272   39703 main.go:141] libmachine: (multinode-388877) Calling .GetSSHHostname
	I1001 18:28:23.037857   39703 main.go:141] libmachine: (multinode-388877) DBG | domain multinode-388877 has defined MAC address 52:54:00:15:7a:34 in network mk-multinode-388877
	I1001 18:28:23.038395   39703 main.go:141] libmachine: (multinode-388877) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:7a:34", ip: ""} in network mk-multinode-388877: {Iface:virbr1 ExpiryTime:2025-10-01 19:25:31 +0000 UTC Type:0 Mac:52:54:00:15:7a:34 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-388877 Clientid:01:52:54:00:15:7a:34}
	I1001 18:28:23.038418   39703 main.go:141] libmachine: (multinode-388877) DBG | domain multinode-388877 has defined IP address 192.168.39.187 and MAC address 52:54:00:15:7a:34 in network mk-multinode-388877
	I1001 18:28:23.038644   39703 main.go:141] libmachine: (multinode-388877) Calling .GetSSHPort
	I1001 18:28:23.038848   39703 main.go:141] libmachine: (multinode-388877) Calling .GetSSHKeyPath
	I1001 18:28:23.038996   39703 main.go:141] libmachine: (multinode-388877) Calling .GetSSHUsername
	I1001 18:28:23.039157   39703 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/multinode-388877/id_rsa Username:docker}
	I1001 18:28:23.117521   39703 ssh_runner.go:195] Run: systemctl --version
	I1001 18:28:23.123858   39703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:28:23.140539   39703 kubeconfig.go:125] found "multinode-388877" server: "https://192.168.39.187:8443"
	I1001 18:28:23.140602   39703 api_server.go:166] Checking apiserver status ...
	I1001 18:28:23.140634   39703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:28:23.159888   39703 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1380/cgroup
	W1001 18:28:23.172142   39703 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1380/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:28:23.172196   39703 ssh_runner.go:195] Run: ls
	I1001 18:28:23.177477   39703 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I1001 18:28:23.182124   39703 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I1001 18:28:23.182150   39703 status.go:463] multinode-388877 apiserver status = Running (err=<nil>)
	I1001 18:28:23.182165   39703 status.go:176] multinode-388877 status: &{Name:multinode-388877 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:28:23.182189   39703 status.go:174] checking status of multinode-388877-m02 ...
	I1001 18:28:23.182564   39703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:28:23.182598   39703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:28:23.196062   39703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I1001 18:28:23.196540   39703 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:28:23.197012   39703 main.go:141] libmachine: Using API Version  1
	I1001 18:28:23.197034   39703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:28:23.197323   39703 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:28:23.197496   39703 main.go:141] libmachine: (multinode-388877-m02) Calling .GetState
	I1001 18:28:23.199012   39703 status.go:371] multinode-388877-m02 host status = "Running" (err=<nil>)
	I1001 18:28:23.199027   39703 host.go:66] Checking if "multinode-388877-m02" exists ...
	I1001 18:28:23.199318   39703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:28:23.199350   39703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:28:23.212765   39703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I1001 18:28:23.213261   39703 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:28:23.214009   39703 main.go:141] libmachine: Using API Version  1
	I1001 18:28:23.214030   39703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:28:23.214453   39703 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:28:23.214660   39703 main.go:141] libmachine: (multinode-388877-m02) Calling .GetIP
	I1001 18:28:23.217523   39703 main.go:141] libmachine: (multinode-388877-m02) DBG | domain multinode-388877-m02 has defined MAC address 52:54:00:f1:85:46 in network mk-multinode-388877
	I1001 18:28:23.218026   39703 main.go:141] libmachine: (multinode-388877-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:85:46", ip: ""} in network mk-multinode-388877: {Iface:virbr1 ExpiryTime:2025-10-01 19:26:56 +0000 UTC Type:0 Mac:52:54:00:f1:85:46 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:multinode-388877-m02 Clientid:01:52:54:00:f1:85:46}
	I1001 18:28:23.218047   39703 main.go:141] libmachine: (multinode-388877-m02) DBG | domain multinode-388877-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:f1:85:46 in network mk-multinode-388877
	I1001 18:28:23.218257   39703 host.go:66] Checking if "multinode-388877-m02" exists ...
	I1001 18:28:23.218601   39703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:28:23.218636   39703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:28:23.231726   39703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45131
	I1001 18:28:23.232166   39703 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:28:23.232591   39703 main.go:141] libmachine: Using API Version  1
	I1001 18:28:23.232609   39703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:28:23.232893   39703 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:28:23.233100   39703 main.go:141] libmachine: (multinode-388877-m02) Calling .DriverName
	I1001 18:28:23.233286   39703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 18:28:23.233309   39703 main.go:141] libmachine: (multinode-388877-m02) Calling .GetSSHHostname
	I1001 18:28:23.236476   39703 main.go:141] libmachine: (multinode-388877-m02) DBG | domain multinode-388877-m02 has defined MAC address 52:54:00:f1:85:46 in network mk-multinode-388877
	I1001 18:28:23.236997   39703 main.go:141] libmachine: (multinode-388877-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:85:46", ip: ""} in network mk-multinode-388877: {Iface:virbr1 ExpiryTime:2025-10-01 19:26:56 +0000 UTC Type:0 Mac:52:54:00:f1:85:46 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:multinode-388877-m02 Clientid:01:52:54:00:f1:85:46}
	I1001 18:28:23.237027   39703 main.go:141] libmachine: (multinode-388877-m02) DBG | domain multinode-388877-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:f1:85:46 in network mk-multinode-388877
	I1001 18:28:23.237213   39703 main.go:141] libmachine: (multinode-388877-m02) Calling .GetSSHPort
	I1001 18:28:23.237355   39703 main.go:141] libmachine: (multinode-388877-m02) Calling .GetSSHKeyPath
	I1001 18:28:23.237524   39703 main.go:141] libmachine: (multinode-388877-m02) Calling .GetSSHUsername
	I1001 18:28:23.237663   39703 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21631-9542/.minikube/machines/multinode-388877-m02/id_rsa Username:docker}
	I1001 18:28:23.316218   39703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:28:23.332394   39703 status.go:176] multinode-388877-m02 status: &{Name:multinode-388877-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:28:23.332440   39703 status.go:174] checking status of multinode-388877-m03 ...
	I1001 18:28:23.332750   39703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:28:23.332787   39703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:28:23.346991   39703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I1001 18:28:23.347572   39703 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:28:23.348158   39703 main.go:141] libmachine: Using API Version  1
	I1001 18:28:23.348180   39703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:28:23.348528   39703 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:28:23.348783   39703 main.go:141] libmachine: (multinode-388877-m03) Calling .GetState
	I1001 18:28:23.350468   39703 status.go:371] multinode-388877-m03 host status = "Stopped" (err=<nil>)
	I1001 18:28:23.350481   39703 status.go:384] host is not running, skipping remaining checks
	I1001 18:28:23.350486   39703 status.go:176] multinode-388877-m03 status: &{Name:multinode-388877-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-388877 node start m03 -v=5 --alsologtostderr: (38.384806922s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.02s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (329.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-388877
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-388877
E1001 18:29:09.889582   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:29:29.921537   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:31:06.822109   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-388877: (2m53.69613179s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-388877 --wait=true -v=5 --alsologtostderr
E1001 18:34:29.919192   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-388877 --wait=true -v=5 --alsologtostderr: (2m35.321890745s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-388877
--- PASS: TestMultiNode/serial/RestartKeepsNodes (329.12s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-388877 node delete m03: (2.199841491s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.74s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (165.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 stop
E1001 18:36:06.822386   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-388877 stop: (2m45.332762852s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-388877 status: exit status 7 (80.565552ms)

                                                
                                                
-- stdout --
	multinode-388877
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-388877-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-388877 status --alsologtostderr: exit status 7 (79.214299ms)

                                                
                                                
-- stdout --
	multinode-388877
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-388877-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 18:37:19.684113   42583 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:37:19.684334   42583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:37:19.684342   42583 out.go:374] Setting ErrFile to fd 2...
	I1001 18:37:19.684346   42583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:37:19.684552   42583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 18:37:19.684730   42583 out.go:368] Setting JSON to false
	I1001 18:37:19.684752   42583 mustload.go:65] Loading cluster: multinode-388877
	I1001 18:37:19.684912   42583 notify.go:220] Checking for updates...
	I1001 18:37:19.685115   42583 config.go:182] Loaded profile config "multinode-388877": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:37:19.685132   42583 status.go:174] checking status of multinode-388877 ...
	I1001 18:37:19.685601   42583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:37:19.685643   42583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:37:19.698572   42583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36869
	I1001 18:37:19.699092   42583 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:37:19.699748   42583 main.go:141] libmachine: Using API Version  1
	I1001 18:37:19.699788   42583 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:37:19.700104   42583 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:37:19.700319   42583 main.go:141] libmachine: (multinode-388877) Calling .GetState
	I1001 18:37:19.702223   42583 status.go:371] multinode-388877 host status = "Stopped" (err=<nil>)
	I1001 18:37:19.702241   42583 status.go:384] host is not running, skipping remaining checks
	I1001 18:37:19.702247   42583 status.go:176] multinode-388877 status: &{Name:multinode-388877 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:37:19.702271   42583 status.go:174] checking status of multinode-388877-m02 ...
	I1001 18:37:19.702676   42583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 18:37:19.702720   42583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 18:37:19.715673   42583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I1001 18:37:19.716156   42583 main.go:141] libmachine: () Calling .GetVersion
	I1001 18:37:19.716644   42583 main.go:141] libmachine: Using API Version  1
	I1001 18:37:19.716667   42583 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 18:37:19.716997   42583 main.go:141] libmachine: () Calling .GetMachineName
	I1001 18:37:19.717173   42583 main.go:141] libmachine: (multinode-388877-m02) Calling .GetState
	I1001 18:37:19.718699   42583 status.go:371] multinode-388877-m02 host status = "Stopped" (err=<nil>)
	I1001 18:37:19.718711   42583 status.go:384] host is not running, skipping remaining checks
	I1001 18:37:19.718716   42583 status.go:176] multinode-388877-m02 status: &{Name:multinode-388877-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (165.49s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (94.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-388877 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:37:32.985745   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-388877 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.431119058s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-388877 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (94.96s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-388877
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-388877-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-388877-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (63.973691ms)

                                                
                                                
-- stdout --
	* [multinode-388877-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-388877-m02' is duplicated with machine name 'multinode-388877-m02' in profile 'multinode-388877'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-388877-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:39:29.918573   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-388877-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.952004418s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-388877
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-388877: exit status 80 (220.198015ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-388877 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-388877-m03 already exists in multinode-388877-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-388877-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.10s)

                                                
                                    
TestScheduledStopUnix (107.34s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-899156 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-899156 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (35.676176776s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-899156 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-899156 -n scheduled-stop-899156
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-899156 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1001 18:42:55.379401   13469 retry.go:31] will retry after 103.285µs: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.380594   13469 retry.go:31] will retry after 128.084µs: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.381736   13469 retry.go:31] will retry after 215.786µs: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.382885   13469 retry.go:31] will retry after 445.26µs: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.384007   13469 retry.go:31] will retry after 324.489µs: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.385131   13469 retry.go:31] will retry after 752.97µs: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.386255   13469 retry.go:31] will retry after 845.778µs: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.387387   13469 retry.go:31] will retry after 1.052399ms: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.389557   13469 retry.go:31] will retry after 1.857136ms: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.391749   13469 retry.go:31] will retry after 3.529291ms: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.395939   13469 retry.go:31] will retry after 4.486654ms: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.401170   13469 retry.go:31] will retry after 6.381231ms: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.408423   13469 retry.go:31] will retry after 17.534068ms: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.426709   13469 retry.go:31] will retry after 10.588255ms: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.438071   13469 retry.go:31] will retry after 22.829598ms: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
I1001 18:42:55.461341   13469 retry.go:31] will retry after 29.751797ms: open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/scheduled-stop-899156/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-899156 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-899156 -n scheduled-stop-899156
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-899156
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-899156 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-899156
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-899156: exit status 7 (62.036657ms)

                                                
                                                
-- stdout --
	scheduled-stop-899156
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-899156 -n scheduled-stop-899156
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-899156 -n scheduled-stop-899156: exit status 7 (66.287414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-899156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-899156
--- PASS: TestScheduledStopUnix (107.34s)

                                                
                                    
TestRunningBinaryUpgrade (97.98s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1793657665 start -p running-upgrade-857786 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1793657665 start -p running-upgrade-857786 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (48.664299225s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-857786 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-857786 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.851222008s)
helpers_test.go:175: Cleaning up "running-upgrade-857786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-857786
--- PASS: TestRunningBinaryUpgrade (97.98s)

                                                
                                    
TestKubernetesUpgrade (168.54s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.723128083s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-130620
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-130620: (1.799802887s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-130620 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-130620 status --format={{.Host}}: exit status 7 (64.34868ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.452168707s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-130620 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (79.086192ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-130620] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-130620
	    minikube start -p kubernetes-upgrade-130620 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1306202 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-130620 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-130620 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.422220818s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-130620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-130620
--- PASS: TestKubernetesUpgrade (168.54s)

                                                
                                    
TestPause/serial/Start (79.62s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-145303 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-145303 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.6162749s)
--- PASS: TestPause/serial/Start (79.62s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-180525 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-180525 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (81.413034ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-180525] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.64s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (82.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-180525 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-180525 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.897225987s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-180525 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (82.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (146.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3979788010 start -p stopped-upgrade-149070 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:44:29.919337   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3979788010 start -p stopped-upgrade-149070 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m41.291842132s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3979788010 -p stopped-upgrade-149070 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3979788010 -p stopped-upgrade-149070 stop: (1.475964089s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-149070 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-149070 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.840502905s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (146.61s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (35.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-180525 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:45:49.891712   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-180525 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (34.773510871s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-180525 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-180525 status -o json: exit status 2 (296.644097ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-180525","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-180525
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (35.97s)

                                                
                                    
TestNoKubernetes/serial/Start (33.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-180525 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:46:06.816697   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-180525 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (33.323277246s)
--- PASS: TestNoKubernetes/serial/Start (33.32s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-149070
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-149070: (1.229922092s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-180525 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-180525 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.432907ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (8.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (5.315941299s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.392588592s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.71s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-180525
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-180525: (1.273086373s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (61.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-180525 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-180525 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.522461882s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (61.52s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-180525 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-180525 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.888386ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestNetworkPlugins/group/false (3.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-371776 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-371776 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (116.357101ms)

                                                
                                                
-- stdout --
	* [false-371776] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 18:47:53.871998   50939 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:47:53.872324   50939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:47:53.872336   50939 out.go:374] Setting ErrFile to fd 2...
	I1001 18:47:53.872341   50939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:47:53.872557   50939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-9542/.minikube/bin
	I1001 18:47:53.873189   50939 out.go:368] Setting JSON to false
	I1001 18:47:53.874210   50939 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5418,"bootTime":1759339056,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 18:47:53.874298   50939 start.go:140] virtualization: kvm guest
	I1001 18:47:53.876968   50939 out.go:179] * [false-371776] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1001 18:47:53.878190   50939 notify.go:220] Checking for updates...
	I1001 18:47:53.878235   50939 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 18:47:53.879651   50939 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:47:53.881141   50939 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-9542/kubeconfig
	I1001 18:47:53.882478   50939 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-9542/.minikube
	I1001 18:47:53.883807   50939 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 18:47:53.885180   50939 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:47:53.887281   50939 config.go:182] Loaded profile config "cert-expiration-252396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:47:53.887404   50939 config.go:182] Loaded profile config "kubernetes-upgrade-130620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:47:53.887531   50939 config.go:182] Loaded profile config "running-upgrade-857786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1001 18:47:53.887677   50939 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:47:53.923419   50939 out.go:179] * Using the kvm2 driver based on user configuration
	I1001 18:47:53.924666   50939 start.go:304] selected driver: kvm2
	I1001 18:47:53.924682   50939 start.go:921] validating driver "kvm2" against <nil>
	I1001 18:47:53.924694   50939 start.go:932] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:47:53.926719   50939 out.go:203] 
	W1001 18:47:53.928232   50939 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1001 18:47:53.932892   50939 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-371776 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-371776" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 18:47:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.179:8443
  name: kubernetes-upgrade-130620
contexts:
- context:
    cluster: kubernetes-upgrade-130620
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 18:47:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-130620
  name: kubernetes-upgrade-130620
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-130620
  user:
    client-certificate: /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kubernetes-upgrade-130620/client.crt
    client-key: /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kubernetes-upgrade-130620/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-371776

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-371776"

                                                
                                                
----------------------- debugLogs end: false-371776 [took: 2.979335834s] --------------------------------
helpers_test.go:175: Cleaning up "false-371776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-371776
--- PASS: TestNetworkPlugins/group/false (3.26s)
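
Every probe in the debugLogs block above fails the same way because the false-371776 profile was evidently never created for this network-plugin case: the test finishes in about 3 seconds, every lookup reports the profile or context as missing, and the captured kubeconfig shows current-context: "" with only a kubernetes-upgrade-130620 entry. A minimal sketch of the kind of guard that would skip those context-scoped probes (a hypothetical helper, not the harness's own code; it only assumes kubectl on PATH):

// Minimal sketch (hypothetical, not the harness's own code): skip
// context-scoped debug probes when the kubectl context they target
// does not exist, which is what every probe above ran into.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists lists the contexts known to kubectl ("kubectl config
// get-contexts -o name" prints one name per line) and reports whether
// name is among them.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("false-371776")
	if err != nil {
		fmt.Println("kubectl not usable:", err)
		return
	}
	if !ok {
		fmt.Println(`context "false-371776" does not exist; skipping kubectl-based probes`)
		return
	}
	// Context exists: a real collector would run its kubectl probes here.
}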

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (99.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-264356 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-264356 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m39.862048946s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (99.86s)
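
The first-start steps in this group all follow the same shape: invoke out/minikube-linux-amd64 start with a fixed flag set and record how long it took, which is where the "(dbg) Run ... / (dbg) Done ... (duration)" pairs come from. A minimal sketch of that run-and-time pattern (not the harness's own runner; it reuses the exact flags shown above and assumes the binary path from the log):

// Minimal sketch (hypothetical) of the run-and-time pattern behind the
// "(dbg) Run / (dbg) Done" lines above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"start", "-p", "old-k8s-version-264356", "--memory=3072",
		"--alsologtostderr", "--wait=true", "--kvm-network=default",
		"--kvm-qemu-uri=qemu:///system", "--disable-driver-mounts",
		"--keep-context=false", "--driver=kvm2", "--container-runtime=crio",
		"--auto-update-drivers=false", "--kubernetes-version=v1.28.0",
	}
	start := time.Now()
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	// Report elapsed time and outcome, mirroring the "(dbg) Done ... (duration)" line.
	fmt.Printf("(dbg) Done: took %s, err=%v, output=%d bytes\n", time.Since(start), err, len(out))
}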

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (89.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-867270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-867270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m29.341223049s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (89.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-223616 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1001 18:49:29.919267   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-223616 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m13.021490138s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-264356 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0be5b2c7-f2e3-4856-bcd3-3e124037c2df] Pending
helpers_test.go:352: "busybox" [0be5b2c7-f2e3-4856-bcd3-3e124037c2df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0be5b2c7-f2e3-4856-bcd3-3e124037c2df] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004333204s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-264356 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)
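
The DeployApp step above creates testdata/busybox.yaml and then waits up to 8 minutes for a pod matching "integration-test=busybox" to reach Running before exec'ing "ulimit -n" in it. A minimal sketch of that label-based wait (a hypothetical stand-in for the helper, assuming only kubectl on PATH and the context name from the log):

// Minimal sketch (hypothetical): poll the pod phase by label until it
// reports Running or the deadline passes, like the "waiting 8m0s for
// pods matching" step above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForRunning(ctx, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second) // re-check every couple of seconds
	}
	return fmt.Errorf("pods matching %q not Running within %s", selector, timeout)
}

func main() {
	if err := waitForRunning("old-k8s-version-264356", "integration-test=busybox", 8*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("busybox pod is Running")
}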

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-264356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-264356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.242717239s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-264356 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (73.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-264356 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-264356 --alsologtostderr -v=3: (1m13.914083233s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (73.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-867270 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [634a6663-bbcb-483e-9519-6483fa551142] Pending
helpers_test.go:352: "busybox" [634a6663-bbcb-483e-9519-6483fa551142] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [634a6663-bbcb-483e-9519-6483fa551142] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004264478s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-867270 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-223616 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3fa4d2bd-25aa-4b78-8a7b-d28251f77446] Pending
helpers_test.go:352: "busybox" [3fa4d2bd-25aa-4b78-8a7b-d28251f77446] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3fa4d2bd-25aa-4b78-8a7b-d28251f77446] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003758636s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-223616 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-867270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-867270 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (88.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-867270 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-867270 --alsologtostderr -v=3: (1m28.230285009s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (88.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-223616 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-223616 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (89.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-223616 --alsologtostderr -v=3
E1001 18:51:06.816169   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-223616 --alsologtostderr -v=3: (1m29.754204984s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (89.75s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-264356 -n old-k8s-version-264356
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-264356 -n old-k8s-version-264356: exit status 7 (76.699021ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-264356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
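
The "status error: exit status 7 (may be ok)" line reflects that minikube status reports cluster state through its exit code as well as its output, so a stopped profile is expected to exit non-zero at this point. A minimal sketch of reading both the printed state and the exit code instead of failing on any non-zero exit (hypothetical, inferred from the run above, which pairs exit status 7 with a Stopped host):

// Minimal sketch (hypothetical) mirroring the "(may be ok)" handling:
// capture the printed host state and the exit code rather than treating
// any non-zero exit from "minikube status" as fatal.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func hostStatus(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: still return the captured state plus the code.
		return string(out), exitErr.ExitCode(), nil
	}
	if err != nil {
		return "", 0, err
	}
	return string(out), 0, nil
}

func main() {
	state, code, err := hostStatus("old-k8s-version-264356")
	if err != nil {
		fmt.Println("could not run minikube status:", err)
		return
	}
	fmt.Printf("host=%s exit=%d\n", state, code)
}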

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (45.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-264356 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-264356 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (44.860225165s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-264356 -n old-k8s-version-264356
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6brxn" [b130d2c2-ac03-4844-9844-84c2b868cfb0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6brxn" [b130d2c2-ac03-4844-9844-84c2b868cfb0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004375767s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-867270 -n no-preload-867270
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-867270 -n no-preload-867270: exit status 7 (77.682685ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-867270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.63s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (58.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-867270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-867270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (58.180720114s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-867270 -n no-preload-867270
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6brxn" [b130d2c2-ac03-4844-9844-84c2b868cfb0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006394375s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-264356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-223616 -n default-k8s-diff-port-223616
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-223616 -n default-k8s-diff-port-223616: exit status 7 (65.151599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-223616 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-223616 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-223616 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (52.126810803s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-223616 -n default-k8s-diff-port-223616
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-264356 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
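
VerifyKubernetesImages lists the images in the profile with "image list --format=json" and reports anything that minikube itself did not ship, which is why kindest/kindnetd and the busybox test image are called out above. A minimal sketch of that kind of filter, applied to image references already pulled out of the JSON (the prefix allow-list is an illustrative assumption, not the test's actual list):

// Minimal sketch (hypothetical): flag image references that fall outside
// an expected allow-list, as in the "Found non-minikube image" lines above.
// JSON decoding of the image list itself is omitted here.
package main

import (
	"fmt"
	"strings"
)

// knownPrefixes is an illustrative assumption, not the test's real list.
var knownPrefixes = []string{
	"registry.k8s.io/",
	"gcr.io/k8s-minikube/storage-provisioner",
}

func isMinikubeImage(ref string) bool {
	for _, p := range knownPrefixes {
		if strings.HasPrefix(ref, p) {
			return true
		}
	}
	return false
}

func main() {
	// Sample refs are the two the test reported above.
	refs := []string{
		"kindest/kindnetd:v20230511-dc714da8",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	for _, r := range refs {
		if !isMinikubeImage(r) {
			fmt.Println("Found non-minikube image:", r)
		}
	}
}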

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-264356 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-264356 -n old-k8s-version-264356
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-264356 -n old-k8s-version-264356: exit status 2 (249.713083ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-264356 -n old-k8s-version-264356
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-264356 -n old-k8s-version-264356: exit status 2 (243.195881ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-264356 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-264356 -n old-k8s-version-264356
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-264356 -n old-k8s-version-264356
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.78s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (64.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-961335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-961335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m4.860096171s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (64.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wjl6s" [912ecf4f-4bb8-46e2-9fb1-3d0ae50de4ca] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.176960405s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4xrst" [452a2a8e-74af-4b83-99e7-3ab54c6318ea] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4xrst" [452a2a8e-74af-4b83-99e7-3ab54c6318ea] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004662148s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wjl6s" [912ecf4f-4bb8-46e2-9fb1-3d0ae50de4ca] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003847664s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-867270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-867270 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-867270 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-867270 --alsologtostderr -v=1: (1.020408738s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-867270 -n no-preload-867270
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-867270 -n no-preload-867270: exit status 2 (338.871052ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-867270 -n no-preload-867270
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-867270 -n no-preload-867270: exit status 2 (339.71011ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-867270 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-867270 -n no-preload-867270
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-867270 -n no-preload-867270
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4xrst" [452a2a8e-74af-4b83-99e7-3ab54c6318ea] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005210224s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-223616 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (78.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-632213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-632213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m18.556930618s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-223616 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-223616 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-223616 -n default-k8s-diff-port-223616
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-223616 -n default-k8s-diff-port-223616: exit status 2 (275.016659ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-223616 -n default-k8s-diff-port-223616
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-223616 -n default-k8s-diff-port-223616: exit status 2 (253.47294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-223616 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-223616 -n default-k8s-diff-port-223616
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-223616 -n default-k8s-diff-port-223616
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (105.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m45.179598343s)
--- PASS: TestNetworkPlugins/group/auto/Start (105.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-961335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-961335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.03329714s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-961335 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-961335 --alsologtostderr -v=3: (7.01270195s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-961335 -n newest-cni-961335
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-961335 -n newest-cni-961335: exit status 7 (63.26188ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-961335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (56.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-961335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1001 18:54:12.987525   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:54:29.919443   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-961335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (56.505914616s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-961335 -n newest-cni-961335
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (56.84s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-961335 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-961335 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-961335 -n newest-cni-961335
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-961335 -n newest-cni-961335: exit status 2 (252.469238ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-961335 -n newest-cni-961335
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-961335 -n newest-cni-961335: exit status 2 (272.84748ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-961335 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-961335 -n newest-cni-961335
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-961335 -n newest-cni-961335
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.76s)
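The Pause test drives a fixed sequence: pause the profile, confirm the apiserver reports Paused while the kubelet reports Stopped (both status queries exit with code 2, which the harness tolerates), then unpause and query again. A shell sketch of that sequence using the exact commands from this run (in an interactive shell the --format value may need quoting):

	out/minikube-linux-amd64 pause -p newest-cni-961335 --alsologtostderr -v=1
	# While paused, component status queries exit non-zero but still print the state.
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-961335 -n newest-cni-961335   # expect "Paused"
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-961335 -n newest-cni-961335     # expect "Stopped"
	out/minikube-linux-amd64 unpause -p newest-cni-961335 --alsologtostderr -v=1
	# After unpausing, the same queries should succeed again.
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-961335 -n newest-cni-961335
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-961335 -n newest-cni-961335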

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (86.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.374726067s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-632213 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [47563531-abb7-4e80-aad1-1c08da5cf345] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [47563531-abb7-4e80-aad1-1c08da5cf345] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004189333s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-632213 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.32s)
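DeployApp creates a busybox pod from the bundled manifest, waits up to 8m for it to become Running under the integration-test=busybox label, then execs a trivial command as a smoke test. A rough manual equivalent, where the kubectl wait line is only a stand-in for the harness's own pod polling:

	kubectl --context embed-certs-632213 create -f testdata/busybox.yaml
	# Stand-in for the harness's polling loop: wait until the pod is Ready.
	kubectl --context embed-certs-632213 wait --for=condition=Ready pod \
	  -l integration-test=busybox --timeout=8m0s
	# Smoke test inside the pod, as the test does with ulimit.
	kubectl --context embed-certs-632213 exec busybox -- /bin/sh -c "ulimit -n"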

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-632213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1001 18:54:56.250488   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:54:56.256952   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:54:56.268357   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:54:56.289817   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:54:56.331321   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:54:56.413058   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:54:56.574676   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-632213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.077791977s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-632213 describe deploy/metrics-server -n kube-system
E1001 18:54:56.896690   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (85.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-632213 --alsologtostderr -v=3
E1001 18:54:57.538818   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:54:58.820549   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:55:01.382173   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:55:06.504002   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-632213 --alsologtostderr -v=3: (1m25.72229603s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (85.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-371776 "pgrep -a kubelet"
E1001 18:55:16.745460   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1001 18:55:16.794503   13469 config.go:182] Loaded profile config "auto-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-371776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4zdw9" [48e4f630-d4e4-4099-af4d-b8b40eee6591] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4zdw9" [48e4f630-d4e4-4099-af4d-b8b40eee6591] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004807638s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-371776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
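The DNS, Localhost and HairPin tests above all exec into the netcat deployment created by NetCatPod and probe connectivity three ways: cluster DNS, the pod's own loopback port, and the pod reaching itself back through its "netcat" Service (hairpin traffic). The three probes, copied from this run, can be replayed by hand:

	# Cluster DNS resolution from inside the pod.
	kubectl --context auto-371776 exec deployment/netcat -- nslookup kubernetes.default
	# Port 8080 on the pod's own loopback.
	kubectl --context auto-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# Hairpin: the pod dialing itself via its own Service name.
	kubectl --context auto-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"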

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (63.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:55:47.505084   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/default-k8s-diff-port-223616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:55:51.747703   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/no-preload-867270/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:55:57.746909   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/default-k8s-diff-port-223616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:56:06.816486   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m3.505945317s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-9298q" [8de0b0c7-f891-4c39-ac09-ea0318611108] Running
E1001 18:56:12.229651   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/no-preload-867270/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005550406s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
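ControllerPod only checks that the CNI's own pod (label app=kindnet in kube-system) becomes healthy within the 10m window. Outside the harness, roughly the same condition can be expressed with kubectl wait; this is a stand-in for the harness's polling, not what the test itself runs:

	# Wait for the kindnet DaemonSet pod to become Ready, mirroring the 10m window above.
	kubectl --context kindnet-371776 wait --for=condition=Ready pod \
	  -l app=kindnet -n kube-system --timeout=10m0s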

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-371776 "pgrep -a kubelet"
I1001 18:56:16.032353   13469 config.go:182] Loaded profile config "kindnet-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-371776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lvkng" [7c0b3db6-f0a6-4a86-aec4-c24e6f2b5461] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1001 18:56:18.189337   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:56:18.228857   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/default-k8s-diff-port-223616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-lvkng" [7c0b3db6-f0a6-4a86-aec4-c24e6f2b5461] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003791613s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-632213 -n embed-certs-632213
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-632213 -n embed-certs-632213: exit status 7 (78.540355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-632213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (43.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-632213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-632213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (43.269938665s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-632213 -n embed-certs-632213
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (43.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-371776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (73.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.758317636s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-nzwdr" [d1e33734-fc83-484e-8f85-017108bf34d4] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00816865s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-371776 "pgrep -a kubelet"
I1001 18:56:53.073874   13469 config.go:182] Loaded profile config "calico-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-371776 replace --force -f testdata/netcat-deployment.yaml
E1001 18:56:53.191655   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/no-preload-867270/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pdwdv" [55ec5279-447a-47f0-9cfc-1916cdeb8644] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1001 18:56:59.190927   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/default-k8s-diff-port-223616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-pdwdv" [55ec5279-447a-47f0-9cfc-1916cdeb8644] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004040777s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-371776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (18.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d8cb5" [7f8aae69-d493-4869-ba4c-1bc48237ecaf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d8cb5" [7f8aae69-d493-4869-ba4c-1bc48237ecaf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.003849745s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (18.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (53.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.83840557s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.84s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d8cb5" [7f8aae69-d493-4869-ba4c-1bc48237ecaf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003745535s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-632213 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-632213 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-632213 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-632213 --alsologtostderr -v=1: (1.053311479s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-632213 -n embed-certs-632213
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-632213 -n embed-certs-632213: exit status 2 (266.049021ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-632213 -n embed-certs-632213
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-632213 -n embed-certs-632213: exit status 2 (258.981801ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-632213 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-632213 -n embed-certs-632213
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-632213 -n embed-certs-632213
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (79.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1001 18:57:40.111314   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.572379798s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-371776 "pgrep -a kubelet"
I1001 18:57:59.075263   13469 config.go:182] Loaded profile config "custom-flannel-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-371776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ss6ph" [714d3c49-394d-43f1-b645-8a23f461a62a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ss6ph" [714d3c49-394d-43f1-b645-8a23f461a62a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005133779s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-371776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-371776 "pgrep -a kubelet"
I1001 18:58:18.457154   13469 config.go:182] Loaded profile config "enable-default-cni-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-371776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xbgvg" [17615ff1-f27e-4e48-8366-e52c76170587] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1001 18:58:21.112745   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/default-k8s-diff-port-223616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xbgvg" [17615ff1-f27e-4e48-8366-e52c76170587] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004316572s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (89.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-371776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m29.648452768s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-371776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-rb8zr" [4e37cfd2-5315-4079-8b53-5b2096392755] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00672809s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-371776 "pgrep -a kubelet"
I1001 18:59:00.545333   13469 config.go:182] Loaded profile config "flannel-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-371776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vjr75" [ee87aa37-a01e-4325-82db-2f35e3d89e3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vjr75" [ee87aa37-a01e-4325-82db-2f35e3d89e3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003785629s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-371776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-371776 "pgrep -a kubelet"
E1001 18:59:56.250536   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1001 18:59:56.420503   13469 config.go:182] Loaded profile config "bridge-371776": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-371776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t6hb2" [c0c60b91-ec20-4502-a262-666ef64a19ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t6hb2" [c0c60b91-ec20-4502-a262-666ef64a19ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003903462s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-371776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-371776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E1001 19:00:23.953585   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:00:27.279889   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/auto-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:00:31.251595   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/no-preload-867270/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:00:37.250618   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/default-k8s-diff-port-223616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:00:37.521242   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/auto-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:00:58.003361   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/auto-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:00:58.955926   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/no-preload-867270/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:04.954291   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/default-k8s-diff-port-223616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:06.816278   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:09.722617   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:09.729076   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:09.740586   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:09.762040   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:09.803476   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:09.884900   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:10.046627   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:10.368343   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:11.010495   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:12.292280   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:14.854509   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:19.976127   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:30.218386   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:38.964879   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/auto-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:46.765670   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:46.772060   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:46.783467   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:46.804851   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:46.846415   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:46.927840   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:47.089625   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:47.411543   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:48.053534   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:49.335675   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:50.699778   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:51.897627   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:57.019681   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:07.261825   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:27.743625   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:29.893674   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/addons-289249/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:31.661202   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:59.307913   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:59.314361   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:59.325731   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:59.347143   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:59.388526   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:59.469991   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:59.631522   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:02:59.953683   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:00.595724   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:00.886422   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/auto-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:01.876992   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:04.438663   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:08.706268   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:09.561027   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:18.712349   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:18.718818   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:18.730276   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:18.751843   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:18.793344   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:18.874889   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:19.036534   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:19.358545   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:19.802645   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:20.000504   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:21.282468   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:23.844563   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:28.966788   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:39.208756   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:40.284471   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:53.584308   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kindnet-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:54.332832   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:54.339164   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:54.350498   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:54.371846   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:54.413255   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:54.494692   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:54.656277   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:54.978020   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:55.620074   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:56.901383   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:59.463188   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:03:59.690868   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:04.584572   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:14.826346   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:21.246356   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:29.918989   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/functional-042563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:30.628011   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/calico-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:35.308292   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:40.652870   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/enable-default-cni-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:56.251636   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/old-k8s-version-264356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:56.668550   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:56.675003   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:56.686403   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:56.707790   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:56.749216   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:56.831091   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:56.992676   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:57.314713   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:57.956803   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:04:59.238563   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:05:01.800374   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:05:06.921756   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:05:16.270112   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:05:17.024992   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/auto-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:05:17.163560   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:05:31.251830   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/no-preload-867270/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:05:37.250621   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/default-k8s-diff-port-223616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:05:37.645522   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/bridge-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:05:43.169333   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/custom-flannel-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:05:44.728089   13469 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/auto-371776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
275 TestStartStop/group/disable-driver-mounts 0.21
279 TestNetworkPlugins/group/kubenet 2.91
287 TestNetworkPlugins/group/cilium 3.43
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-289249 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-354825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-354825
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-371776 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-371776" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 18:47:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.179:8443
  name: kubernetes-upgrade-130620
contexts:
- context:
    cluster: kubernetes-upgrade-130620
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 18:47:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-130620
  name: kubernetes-upgrade-130620
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-130620
  user:
    client-certificate: /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kubernetes-upgrade-130620/client.crt
    client-key: /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kubernetes-upgrade-130620/client.key
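
The kubeconfig dumped above defines only the kubernetes-upgrade-130620 cluster, context, and user, and current-context is empty; that is why every kubectl lookup against the kubenet-371776 context in this debugLogs block reports "context was not found". A minimal Go sketch of that check, assuming a hypothetical kubeconfig path (the report does not show where the file lives on the Jenkins host) and using client-go's clientcmd loader:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; substitute the real kubeconfig location.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// The dump above lists only the kubernetes-upgrade-130620 context,
	// so a lookup for kubenet-371776 comes back empty.
	if _, ok := cfg.Contexts["kubenet-371776"]; !ok {
		fmt.Println("context was not found for specified context: kubenet-371776")
	}
}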

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-371776

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-371776"

                                                
                                                
----------------------- debugLogs end: kubenet-371776 [took: 2.751424824s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-371776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-371776
--- SKIP: TestNetworkPlugins/group/kubenet (2.91s)
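
Note: the repeated "context was not found" / "Profile not found" output above is expected for a skipped test. The kubenet-371776 profile is never created (the same applies to the cilium-371776 dump below), and the kubectl config dump shows that the run's kubeconfig holds only a leftover kubernetes-upgrade-130620 entry with current-context set to "". A minimal Go sketch of the check these errors correspond to, using client-go to look up a context by name before issuing any commands (illustrative only, not part of the test suite; the kubeconfig path is an assumed stand-in):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path for illustration only; the real test run uses its own kubeconfig.
	kubeconfigPath := "/home/jenkins/minikube-integration/21631-9542/kubeconfig"
	contextName := "kubenet-371776"

	// LoadFromFile parses the kubeconfig locally without contacting any cluster.
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}

	if _, ok := cfg.Contexts[contextName]; !ok {
		// This is the condition behind the kubectl errors in the logs above.
		fmt.Printf("context %q not found (current-context=%q)\n", contextName, cfg.CurrentContext)
		return
	}
	fmt.Printf("context %q exists\n", contextName)
}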

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-371776 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-371776" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21631-9542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 18:47:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.179:8443
  name: kubernetes-upgrade-130620
contexts:
- context:
    cluster: kubernetes-upgrade-130620
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 18:47:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-130620
  name: kubernetes-upgrade-130620
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-130620
  user:
    client-certificate: /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kubernetes-upgrade-130620/client.crt
    client-key: /home/jenkins/minikube-integration/21631-9542/.minikube/profiles/kubernetes-upgrade-130620/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-371776

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-371776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-371776"

                                                
                                                
----------------------- debugLogs end: cilium-371776 [took: 3.28185927s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-371776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-371776
--- SKIP: TestNetworkPlugins/group/cilium (3.43s)

                                                
                                    